Major tasks in NLP

The following is a list of some of the most commonly researched tasks in NLP. Note that some of these tasks have direct real-world applications, while others more commonly serve as subtasks that are used to aid in solving larger tasks. What distinguishes these tasks from other potential and actual NLP tasks is not only the volume of research devoted to them but also the fact that for each one there is typically a well-defined problem setting, a standard metric for evaluating the task, standard corpora on which the task can be evaluated, and competitions devoted to the specific task.

 

· Automatic summarization: Produce a readable summary of a chunk of text. Often used to provide summaries of text of a known type, such as articles in the financial section of a newspaper.
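A minimal sketch of one classic approach, extractive summarization by word-frequency scoring, is shown below; the stop-word list, the crude sentence split and the example parameters are illustrative assumptions, not any particular library's method.

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "are"}  # illustrative

def summarize(text, max_sentences=2):
    # Crude sentence split on terminal punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if w not in STOP_WORDS)
    # Score each sentence by the summed frequency of its content words.
    score = lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))
    kept = set(sorted(sentences, key=score, reverse=True)[:max_sentences])
    # Keep the top-ranked sentences in their original order.
    return " ".join(s for s in sentences if s in kept)
```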

· Coreference resolution: Given a sentence or larger chunk of text, determine which words ("mentions") refer to the same objects ("entities"). Anaphora resolution is a specific example of this task, and is specifically concerned with matching up pronouns with the nouns or names that they refer to.

· Discourse analysis: This rubric includes a number of related tasks. One task is identifying the discourse structure of connected text, i.e. the nature of the discourse relationships between sentences (e.g. elaboration, explanation, contrast). Another possible task is recognizing and classifying the speech acts in a chunk of text.

· Machine translation: Automatically translate text from one human language to another. This is one of the most difficult problems, and is a member of a class of problems colloquially termed "AI-complete", i.e. requiring all of the different types of knowledge that humans possess (grammar, semantics, facts about the real world, etc.) in order to solve properly.

· Morphological segmentation: Separate words into individual morphemes and identify the class of the morphemes. The difficulty of this task depends greatly on the complexity of the morphology (i.e. the structure of words) of the language being considered. English has fairly simple morphology, especially inflectional morphology, and thus it is often possible to ignore this task entirely and simply model all possible forms of a word (e.g. "open, opens, opened, opening") as separate words. In languages such as Turkish, however, such an approach is not possible, as each dictionary entry has thousands of possible word forms.

· Named entity recognition (NER): Given a stream of text, determine which items in the text map to proper names, such as people or places, and what the type of each such name is (e.g. person, location, organization). Note that, although capitalization can aid in recognizing named entities in languages such as English, this information cannot aid in determining the type of named entity, and in any case is often inaccurate or insufficient. For example, the first word of a sentence is also capitalized, and named entities often span several words, only some of which are capitalized. Furthermore, many other languages in non-Western scripts (e.g. Chinese or Arabic) do not have any capitalization at all, and even languages with capitalization may not consistently use it to distinguish names. For example, German capitalizes all nouns, regardless of whether they refer to names.
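As a sketch, the widely used spaCy library ships pretrained named-entity recognizers; the snippet below assumes spaCy and its small English model en_core_web_sm are installed, and the example sentence is illustrative.

```python
import spacy

nlp = spacy.load("en_core_web_sm")          # pretrained English pipeline
doc = nlp("Angela Merkel visited Google in Mountain View last May.")
for ent in doc.ents:
    print(ent.text, ent.label_)             # e.g. PERSON, ORG, GPE, DATE labels
```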

· Natural language generation: Convert information from computer databases into readable human language.

· Natural language understanding: Convert chunks of text into more formal representations, such as first-order logic structures, that are easier for computer programs to manipulate. Natural language understanding involves identifying the intended meaning among the multiple possible meanings that can be derived from a natural language expression, which usually takes the form of organized notations of natural-language concepts. Introducing and creating a language metamodel and ontology is an efficient, though empirical, solution. An explicit formalization of natural-language semantics, free of confusion with implicit assumptions such as the closed-world assumption (CWA) vs. the open-world assumption, or subjective yes/no vs. objective true/false, is expected to form the basis of semantics formalization.

· Optical character recognition (OCR): Given an image representing printed text, determine the corresponding text.

· Part-of-speech tagging: Given a sentence, determine the part of speech for each word. Many words, especially common ones, can serve as multiple parts of speech. Note that some languages have more such ambiguity than others. Languages with little inflectional morphology, such as English, are particularly prone to such ambiguity, because the written form of a word often gives little indication of its grammatical role.
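For illustration, NLTK's off-the-shelf tagger can be used as sketched below (assuming the nltk package is installed and its tokenizer and tagger resources have been downloaded); the example sentence is chosen to show the ambiguity.

```python
import nltk

tokens = nltk.word_tokenize("They refuse to permit us to obtain the refuse permit.")
print(nltk.pos_tag(tokens))
# "refuse" and "permit" are each tagged once as a verb and once as a noun,
# depending on their context in the sentence.
```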

· Parsing: Determine the parse tree (grammatical analysis) of a given sentence. The grammar for natural languages is ambiguous, and typical sentences have multiple possible analyses. In fact, perhaps surprisingly, for a typical sentence there may be thousands of potential parses (most of which will seem completely nonsensical to a human).
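The ambiguity can be made concrete with a toy grammar and NLTK's chart parser, as in the sketch below (the grammar and sentence are illustrative assumptions).

```python
import nltk

grammar = nltk.CFG.fromstring("""
S  -> NP VP
NP -> 'I' | Det N | NP PP
VP -> V NP | VP PP
PP -> P NP
Det -> 'an' | 'my'
N  -> 'elephant' | 'pajamas'
V  -> 'shot'
P  -> 'in'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("I shot an elephant in my pajamas".split()):
    print(tree)   # two trees: the PP attaches either to the VP or to the NP
```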

· Question answering: Given a human-language question, determine its answer. Typical questions have a specific right answer (such as "What is the capital of Canada?"), but sometimes open-ended questions are also considered (such as "What is the meaning of life?").

· Relationship extraction: Given a chunk of text, identify the relationships among named entities.

· Sentence breaking (also known as sentence boundary disambiguation): Given a chunk of text, find the sentence boundaries. Sentence boundaries are often marked by periods or other punctuation marks, but these same characters can serve other purposes.
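A minimal rule-based splitter, sketched below, treats terminal punctuation as a boundary unless the token is a known abbreviation; the abbreviation list and the example text are illustrative assumptions.

```python
ABBREVIATIONS = {"dr.", "mr.", "mrs.", "a.m.", "p.m.", "e.g.", "i.e.", "etc."}

def split_sentences(text):
    sentences, current = [], []
    for token in text.split():
        current.append(token)
        # A period can end an abbreviation rather than a sentence.
        if token.endswith((".", "!", "?")) and token.lower() not in ABBREVIATIONS:
            sentences.append(" ".join(current))
            current = []
    if current:
        sentences.append(" ".join(current))
    return sentences

print(split_sentences("Dr. Smith arrived at 9 p.m. yesterday. He left early."))
# ['Dr. Smith arrived at 9 p.m. yesterday.', 'He left early.']
```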

· Sentiment analysis: Extract subjective information, usually from a set of documents, often using online reviews to determine "polarity" about specific objects. It is especially useful for identifying trends of public opinion in social media, for the purpose of marketing.
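A very small lexicon-based sketch is shown below; real systems use much larger sentiment lexicons or trained classifiers, and the word lists here are illustrative assumptions.

```python
POSITIVE = {"good", "great", "excellent", "love", "wonderful"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "awful"}

def polarity(review):
    # Positive count minus negative count; > 0 suggests positive polarity.
    words = review.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(polarity("I love the battery life"))    #  1 -> positive
print(polarity("The screen is terrible"))     # -1 -> negative
```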

· Speech recognition: Given a sound clip of a person or people speaking, determine the textual representation of the speech. This is the opposite of text-to-speech and is one of the extremely difficult problems colloquially termed "AI-complete" (see above). In natural speech there are hardly any pauses between successive words, and thus speech segmentation is a necessary subtask of speech recognition. Note also that in most spoken languages, the sounds representing successive letters blend into each other in a process termed coarticulation, so the conversion of the analog signal to discrete characters can be a very difficult process.

· Speech segmentation: Given a sound clip of a person or people speaking, separate it into words. A subtask of speech recognition, typically grouped with it.

· Topic segmentation and recognition: Given a chunk of text, separate it into segments, each of which is devoted to a topic, and identify the topic of each segment.

· Word segmentation: Separate a chunk of continuous text into separate words. For a language like English, this is fairly trivial, since words are usually separated by spaces. However, some written languages like Chinese, Japanese and Thai do not mark word boundaries in such a fashion, and in those languages text segmentation is a significant task requiring knowledge of the vocabulary and morphology of words in the language.
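A classic baseline for such languages is dictionary-based maximum matching, sketched below with a tiny illustrative lexicon.

```python
LEXICON = {"北京", "大学", "北京大学", "生", "学生"}   # illustrative toy dictionary

def max_match(text):
    words, i = [], 0
    while i < len(text):
        # Greedily take the longest lexicon entry starting at position i;
        # fall back to a single character if nothing matches.
        for j in range(len(text), i, -1):
            if text[i:j] in LEXICON or j == i + 1:
                words.append(text[i:j])
                i = j
                break
    return words

print(max_match("北京大学生"))   # ['北京大学', '生'] with this lexicon
```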

· Word sense disambiguation: Many words have more than one meaning; the task is to select the meaning that makes the most sense in context.
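The simplified Lesk algorithm bundled with NLTK gives a quick illustration (assuming nltk and its WordNet corpus are installed); it picks the WordNet sense whose dictionary gloss overlaps most with the surrounding context. The example sentence is an assumption.

```python
from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

context = word_tokenize("I went to the bank to deposit my money")
sense = lesk(context, "bank", "n")     # disambiguate the noun "bank" in context
print(sense, "-", sense.definition())
```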

 

In some cases, sets of related tasks are grouped into subfields of NLP that are often considered separately from NLP as a whole. Examples include:

Ø Information retrieval (IR): This is concerned with storing, searching and retrieving information. It is a separate field within computer science (closer to databases), but IR relies on some NLP methods (for example, stemming). Some current research and applications seek to bridge the gap between IR and NLP.
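For instance, an IR system typically stems words before indexing so that different inflected forms match the same index entry; the sketch below uses NLTK's Porter stemmer (assuming nltk is installed).

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
print([stemmer.stem(w) for w in ["open", "opens", "opened", "opening"]])
# all four forms reduce to the same stem, 'open'
```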

Ø Information extraction (IE): This is concerned in general with the extraction of semantic information from text. This covers tasks such as named entity recognition, coreference resolution, relationship extraction, etc.

Ø Speech processing: This covers speech recognition, text-to-speech and related tasks.


Other tasks include:

• Stemming

• Text simplification

• Text-proofing

• Natural language search

• Query expansion

• Automated essay scoring

• Truecasing

 

2. Planning.

 

Automated planning and scheduling is a branch of artificial intelligence that concerns the realization of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems, the solutions are complex and must be discovered and optimized in multidimensional space.

In known environments with available models, planning can be done offline. Solutions can be found and evaluated prior to execution. In dynamically unknown environments, the strategy often needs to be revised online. Models and policies must be adapted. Solutions usually resort to iterative trial-and-error processes commonly seen in artificial intelligence. These include dynamic programming, reinforcement learning and combinatorial optimization. Languages used to describe planning and scheduling are often called action languages.

The difficulty of planning is dependent on the simplifying assumptions employed. Several classes of planning problems can be identified depending on the properties the problems have in several dimensions.

· Are the actions deterministic or nondeterministic? For nondeterministic actions, are the associated probabilities available?

· Are the state variables discrete or continuous? If they are discrete, do they have only a finite number of possible values?

· Can the current state be observed unambiguously? There can be full observability and partial observability.

· How many initial states are there?

· Do actions have a duration?

· Can several actions be taken concurrently, or is only one action possible at a time?

· Is the objective of a plan to reach a designated goal state, or to maximize a reward function?

· Is there only one agent or are there several agents? Are the agents cooperative or selfish? Do all of the agents construct their own plans separately, or are the plans constructed centrally for all agents?

 

The simplest possible planning problem, known as the Classical Planning Problem, is determined by:

ü a unique known initial state,

ü durationless actions,

ü deterministic actions,

ü which can be taken only one at a time,

ü and a single agent.

 

Since the initial state is known unambiguously, and all actions are deterministic, the state of the world after any sequence of actions can be accurately predicted, and the question of observability is irrelevant for classical planning.

Further, plans can be defined as sequences of actions, because it is always known in advance which actions will be needed.
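Under these assumptions, planning reduces to search in the space of states; the sketch below finds such a plan by breadth-first search over a toy domain (the domain and action names are illustrative assumptions, not a standard benchmark).

```python
from collections import deque

def plan(initial, goal, actions):
    """Breadth-first search; actions maps a name to a function returning the
    successor state, or None if the action is inapplicable in that state."""
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path                          # the plan: a sequence of action names
        for name, step in actions.items():
            nxt = step(state)
            if nxt is not None and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [name]))
    return None                                  # no plan exists

# Toy domain: a robot moves between three rooms A, B, C arranged in a line.
ACTIONS = {
    "move_right": lambda s: {"A": "B", "B": "C"}.get(s),
    "move_left":  lambda s: {"C": "B", "B": "A"}.get(s),
}
print(plan("A", "C", ACTIONS))   # ['move_right', 'move_right']
```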

With nondeterministic actions or other events outside the control of the agent, the possible executions form a tree, and plans have to determine the appropriate actions for every node of the tree.

Discrete-time Markov decision processes (MDPs) are planning problems with:

· durationless actions,

· nondeterministic actions with probabilities,

· full observability,

· maximization of a reward function,

· and a single agent.
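Such a problem can be solved, for example, by value iteration, a dynamic-programming method; the sketch below uses a tiny two-state MDP whose transition probabilities and rewards are illustrative assumptions.

```python
GAMMA = 0.9                     # discount factor
STATES = ["s0", "s1"]
ACTIONS = ["stay", "go"]
# P[(state, action)] -> list of (probability, next_state, reward)
P = {
    ("s0", "stay"): [(1.0, "s0", 0.0)],
    ("s0", "go"):   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)],
    ("s1", "stay"): [(1.0, "s1", 2.0)],
    ("s1", "go"):   [(1.0, "s0", 0.0)],
}

def q(s, a, V):
    # Expected immediate reward plus discounted value of the successor state.
    return sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[(s, a)])

V = {s: 0.0 for s in STATES}
for _ in range(100):            # repeat the Bellman update until it converges
    V = {s: max(q(s, a, V) for a in ACTIONS) for s in STATES}

policy = {s: max(ACTIONS, key=lambda a: q(s, a, V)) for s in STATES}
print(V, policy)                # optimal policy here: "go" in s0, "stay" in s1
```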

 

When full observability is replaced by partial observability, planning corresponds to a partially observable Markov decision process (POMDP).

If there is more than one agent, we have multi-agent planning, which is closely related to game theory.



