An example of a digital assistant is described in Applicant's U. As shown in FIG. The DA client communicates with the DA server through one or more networks. The DA client provides client-side functionalities, such as user-facing input and output processing and communications with the DA server. The DA server provides server-side functionalities for any number of DA clients, each residing on a respective user device. The one or more processing modules utilize the data and models to determine the user's intent based on natural language input and to perform task execution based on the inferred user intent.

In some implementations, the DA server communicates with external services through the network(s) for task completion or information acquisition. Examples of the user device include, but are not limited to, a handheld computer, a personal digital assistant (PDA), a tablet computer, a laptop computer, a desktop computer, a cellular telephone, a smart phone, an enhanced general packet radio service (EGPRS) mobile phone, a media player, a navigation device, a game console, a television, a remote control, or a combination of any two or more of these data processing devices or other data processing devices.

More details on the user device are provided in reference to an exemplary user device shown in FIG. The communication network(s) may be implemented using any known network protocol, including various wired or wireless protocols, such as e. The server system is implemented on one or more standalone data processing apparatus or a distributed network of computers. Although the digital assistant shown in FIG. In addition, the divisions of functionalities between the client and server portions of the digital assistant can vary in different implementations.

For example, in some implementations, the DA client is a thin-client that provides only user-facing input and output processing functions, and delegates all other functionalities of the digital assistant to a backend server.

The user device includes a memory interface, one or more processors, and a peripherals interface. The various components in the user device are coupled by one or more communication buses or signal lines. The user device includes various sensors, subsystems, and peripheral devices that are coupled to the peripherals interface. For example, a motion sensor, a light sensor, and a proximity sensor are coupled to the peripherals interface to facilitate orientation, light, and proximity sensing functions.

One or more other sensors, such as a positioning system e. In some implementations, a camera subsystem and an optical sensor are utilized to facilitate camera functions, such as taking photographs and recording video clips. An audio subsystem is coupled to speakers and a microphone to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions. The touch-screen controller is coupled to a touch screen. The touch screen and the touch screen controller can, for example, detect contact and movement or break thereof using any of a plurality of touch sensitivity technologies, such as capacitive, resistive, infrared, surface acoustic wave technologies, proximity sensor arrays, and the like.

In some implementations, the memory interface is coupled to memory. In some implementations, the memory stores an operating system, a communication module, a user interface module, a sensor processing module, a phone module, and applications. The operating system includes instructions for handling basic system services and for performing hardware dependent tasks.

The user interface module facilitates graphic user interface processing and output processing using other output channels e. The sensor processing module facilitates sensor-related processing and functions.

The phone module facilitates phone-related processes and functions. As described in this specification, the memory also stores client-side digital assistant instructions e.

In various implementations, the digital assistant client module is capable of accepting voice input e. The digital assistant client module is also capable of providing output in audio e. During operation, the digital assistant client module communicates with the digital assistant server using the communication subsystems. In some implementations, the digital assistant client module includes a speech synthesis module. The speech synthesis module synthesizes speech outputs for presentation to the user.

The speech synthesis module synthesizes speech outputs based on text provided by the digital assistant. For example, the digital assistant generates text to provide as an output to a user, and the speech synthesis module converts the text to an audible speech output. The speech synthesis module uses any appropriate speech synthesis technique in order to generate speech outputs from text, including but not limited to concatenative synthesis, unit selection synthesis, diphone synthesis, domain-specific synthesis, formant synthesis, articulatory synthesis, hidden Markov model (HMM) based synthesis, and sinewave synthesis.

In some implementations, the speech synthesis module stores canonical pronunciations for certain words. In some implementations, multiple possible pronunciations are stored for a given word, including user-specified pronunciations. As described herein, the pronunciation that is ultimately selected for synthesis is determined based on any of several possible factors or combinations thereof e.

In some implementations, where a user has provided a correct or preferred pronunciation for a word e. Techniques for acquiring and processing user-specified pronunciations are discussed herein.

In some implementations, user-specified pronunciations for use by the speech synthesis module are represented using a speech synthesis phonetic alphabet e. In some implementations, the user-specified pronunciations are stored in the user data. For example, user-specified pronunciations of the names of contacts in a user's electronic address book or contact list are stored in association with the respective contacts.
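
To make the idea concrete, here is a minimal sketch of such a per-user pronunciation store keyed by word or contact name. The class name, the space-separated phoneme notation, and the fall-back-to-default behavior are assumptions for illustration, not the data structures of any particular implementation.

```python
class UserPronunciationStore:
    """Holds user-specified pronunciations for speech synthesis."""

    def __init__(self):
        # word or contact name -> phoneme string in a synthesis phonetic alphabet
        self._pronunciations = {}

    def set_pronunciation(self, word, phonemes):
        self._pronunciations[word.lower()] = phonemes

    def get_pronunciation(self, word):
        # Return the user-specified pronunciation, or None so the synthesizer
        # can fall back to its default lexicon.
        return self._pronunciations.get(word.lower())


store = UserPronunciationStore()
store.set_pronunciation("Siobhan", "sh ih - v ao n")  # hypothetical phoneme string
print(store.get_pronunciation("Siobhan"))             # user-specified pronunciation
print(store.get_pronunciation("Alice"))               # None -> use default lexicon
```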

User-specified pronunciations may be visible to or hidden from the user. For example, a user can select a user-specified pronunciation and modify, alter, or replace it, using text or speech inputs. In some implementations, user-specified pronunciations of other words e. Thus, in some implementations, any words for which the user wishes to specify a particular pronunciation are accessible by the speech synthesis module. In some implementations, user-specified pronunciations are stored remotely from the user device, such as in a remote server or cloud-based service e.

Accordingly, user-specified pronunciations of words are accessible to a user via multiple user devices. This also helps increase the perceived intelligence of the digital assistant, because once a user specifies a particular pronunciation of a word or name, the digital assistant can use the correct pronunciation regardless of whether the user is interacting with the digital assistant on her smart phone or other computing device, e.

In some implementations, user-specified pronunciations are stored both locally e. In some implementations, user-specified pronunciations for a particular user are copied to a user device upon authentication of the device to access an account associated with the user. For example, user-specified pronunciations stored on the server system may be associated with a particular user account, and when a device becomes associated with that user account e.
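
A rough sketch of the copy-on-authentication behavior described above is shown below; the table layout, function names, and the "server copy overwrites local" policy are assumptions, not the actual sync mechanism.

```python
# Sketch only: a server-side table of per-account pronunciations and a routine
# that copies them to a device once it authenticates for that account.

SERVER_PRONUNCIATIONS = {
    "user-123": {"siobhan": "sh ih - v ao n", "nguyen": "w ih n"},
}

def authenticate(device_id, account_id):
    # Stand-in for real account authentication.
    return account_id in SERVER_PRONUNCIATIONS

def sync_pronunciations(device_store, device_id, account_id):
    """Copy the account's user-specified pronunciations to the device."""
    if not authenticate(device_id, account_id):
        return
    # Here the server copy simply overwrites; a real system might merge
    # by timestamp so local edits are not lost.
    device_store.update(SERVER_PRONUNCIATIONS[account_id])

local_store = {}
sync_pronunciations(local_store, device_id="phone-1", account_id="user-123")
print(local_store)  # pronunciations are now available locally as well
```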

In some implementations, instead of or in addition to using the local speech synthesis module, speech synthesis is performed on a remote device e. For example, this occurs in some implementations where outputs for a digital assistant are generated at a server system. And because server systems generally have more processing power or resources than a user device, it may be possible to obtain higher quality speech outputs than would be practical with client-side synthesis.

In some implementations, the digital assistant client module provides the context information or a subset thereof with the user input to the digital assistant server to help infer the user's intent. In some implementations, the digital assistant also uses the context information to determine how to prepare and deliver outputs to the user. In some implementations, the context information that accompanies the user input includes sensor information, e. In some implementations, the context information also includes the physical state of the device, e.

In some implementations, information related to the software state of the user device, e. In some implementations, the DA client module selectively provides information e.
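
The sketch below illustrates what such a context payload might look like and how information could be selectively withheld; the field names, values, and the privacy rule are hypothetical.

```python
import time

def build_context(share_location=True):
    """Assemble a context payload to send alongside a user request (sketch)."""
    context = {
        "timestamp": time.time(),
        # Sensor information (placeholder values).
        "ambient_light": "low",
        "orientation": "portrait",
        # Physical and software state of the device.
        "battery_level": 0.72,
        "network": "wifi",
        "foreground_app": "mail",
        "location": (37.33, -122.03),
    }
    if not share_location:
        # Selectively withhold information, e.g. per the user's privacy settings.
        context.pop("location")
    return context

request = {"utterance": "call my dentist", "context": build_context(share_location=False)}
print(sorted(request["context"].keys()))
```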

In some implementations, the digital assistant client module also elicits additional input from the user via a natural language dialogue or other user interfaces upon request by the digital assistant server. In various implementations, the memory includes additional instructions or fewer instructions. In some implementations, the digital assistant system is implemented on a standalone computer system. In some implementations, the digital assistant system is distributed across multiple computers.

In some implementations, some of the modules and functions of the digital assistant are divided into a server portion and a client portion, where the client portion resides on a user device e. It should be noted that the digital assistant system is only one example of a digital assistant system, and that the digital assistant system may have more or fewer components than shown, may combine two or more components, or may have a different configuration or arrangement of the components.

The various components shown in FIG. These components communicate with one another over one or more communication buses or signal lines. In some implementations, e. In some implementations, the digital assistant system represents the server portion of a digital assistant implementation, and interacts with the user through a client-side portion residing on a user device e.

The wired communication port(s) receive and send communication signals via one or more wired interfaces, e. In some implementations, memory, or the computer readable storage media of memory, stores programs, modules, instructions, and data structures, including all or a subset of: The operating system e.

The communications module facilitates communications between the digital assistant system and other devices over the network communications interface. For example, the communication module may communicate with the communication interface of the device shown in FIG. The user interface module also prepares and delivers outputs e.

For example, if the digital assistant system is implemented on a standalone user device, the applications may include user applications, such as games, a calendar application, a navigation application, or an email application.

If the digital assistant system is implemented on a server farm, the applications may include resource management applications, diagnostic applications, or scheduling applications, for example. The memory also stores the digital assistant module or the server portion of a digital assistant. In some implementations, the digital assistant module includes the following sub-modules, or a subset or superset thereof: Each of these modules has access to one or more of the following data and models of the digital assistant, or a subset or superset thereof: In some implementations, using the processing modules, data, and models implemented in the digital assistant module, the digital assistant performs at least some of the following: In some implementations, as shown in FIG.

In some implementations, the context information also includes software and hardware states of the device e. The speech-to-text processing module or speech recognizer receives speech input e. In some implementations, the STT processing module uses various acoustic and language models to recognize the speech input as a sequence of phonemes, and ultimately, a sequence of words or tokens written in one or more languages.

In some implementations, the speech-to-text processing can be performed at least partially by a third party service or on the user's device. Once the STT processing module obtains the result of the speech-to-text processing, e.

In some implementations, the STT module resides on a server computer e. Each word is associated with one or more candidate pronunciations of the word represented in a speech recognition phonetic alphabet. In some implementations, the candidate pronunciations are manually generated, e.

In some implementations, the candidate pronunciations are ranked based on the commonness of the candidate pronunciation. In some implementations, one of the candidate pronunciations is selected as a predicted pronunciation e.
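
As an illustration of ranking candidate pronunciations and selecting a predicted one (with a user-specified pronunciation taking precedence), consider the following sketch; the lexicon format, phoneme strings, and scores are invented.

```python
# Assumed lexicon format for illustration: each word maps to candidate
# pronunciations with a commonness score, and the top-ranked candidate is the
# predicted pronunciation unless the user has specified one explicitly.

CANDIDATES = {
    "tomato": [
        ("t ah - m ey - t ow", 0.7),   # scores are made up
        ("t ah - m aa - t ow", 0.3),
    ],
}

USER_SPECIFIED = {"tomato": "t ah - m aa - t ow"}

def predicted_pronunciation(word):
    if word in USER_SPECIFIED:            # a user-specified pronunciation wins
        return USER_SPECIFIED[word]
    ranked = sorted(CANDIDATES.get(word, []), key=lambda c: c[1], reverse=True)
    return ranked[0][0] if ranked else None

print(predicted_pronunciation("tomato"))
```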

When an utterance is received, the STT processing module attempts to identify the phonemes in the utterance e. As described herein, in some implementations, the STT processing module identifies phonemes in an utterance of a known word for the purpose of generating a user-specified pronunciation of the word. The STT processing module processes the utterance containing the preferred pronunciation to identify the phonemes in the utterance.

For example, a user may discover that the digital assistant cannot accurately recognize a particular contact's name. Once the user specifies a preferred pronunciation for the name, the digital assistant, and specifically the speech-to-text processing module, will thereafter accurately recognize the name in user utterances.

In some implementations, user-specified pronunciations for speech recognition by the speech-to-text processing module are stored in the vocabulary index. In some implementations, user-specified pronunciations are also or instead stored in association with words in user data. In some implementations, user-specified name pronunciations are visible to the user, while in other implementations they are not.

In some implementations, user-specified pronunciations for use by the speech-to-text processing module are represented using a speech recognition phonetic alphabet e. In some implementations, the speech recognition phonetic alphabet corresponds to the set of phonemes that the STT processing module is capable of identifying in a recording of a spoken utterance. The phonetic alphabet conversion module converts phonetic representations of words between different phonetic alphabets.

Specifically, in some implementations, a speech recognizer e. Speech synthesizers and speech recognizers, therefore, cannot share a single phonetic representation because they use different phonetic alphabets. Thus, in some implementations, the phonetic alphabet conversion module converts phonetic representations from one phonetic alphabet e.
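
A toy version of such a phonetic alphabet conversion module is sketched below; both symbol inventories and the mapping table are invented for illustration and do not correspond to any real recognizer or synthesizer.

```python
# Toy conversion table from a recognition phonetic alphabet to a synthesis
# phonetic alphabet. A real module would map the recognizer's phoneme
# inventory onto the synthesizer's.

RECOGNITION_TO_SYNTHESIS = {
    "SH": "S",
    "IY": "i",
    "V": "v",
    "AA": "a",
    "N": "n",
}

def convert_phonemes(recognition_phonemes):
    """Map each recognition-alphabet symbol to the synthesis alphabet."""
    converted = []
    for symbol in recognition_phonemes:
        if symbol not in RECOGNITION_TO_SYNTHESIS:
            raise ValueError("no mapping for symbol %r" % symbol)
        converted.append(RECOGNITION_TO_SYNTHESIS[symbol])
    return converted

print(convert_phonemes(["SH", "IY", "V", "AA", "N"]))  # -> ['S', 'i', 'v', 'a', 'n']
```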

Accordingly, as described herein, a phonetic representation of a word that is determined using the STT processing module can be converted or mapped to a phonetic alphabet that is usable by the speech synthesis module. The associated task flow is a series of programmed actions and steps that the digital assistant takes in order to perform the task.

In some implementations, in addition to the sequence of words or tokens obtained from the speech-to-text processing module, the natural language processor also receives context information associated with the user request, e. As described in this specification, context information is dynamic, and can change with time, location, content of the dialogue, and other factors.

In some implementations, the natural language processing is based on e. A linkage between an actionable intent node and a property node in the ontology defines how a parameter represented by the property node pertains to the task represented by the actionable intent node.

In some implementations, the ontology is made up of actionable intent nodes and property nodes. Within the ontology, each actionable intent node is linked to one or more property nodes either directly or through one or more intermediate property nodes.

Similarly, each property node is linked to one or more actionable intent nodes either directly or through one or more intermediate property nodes. For example, as shown in FIG. For example, the ontology shown in FIG. Each domain may share one or more property nodes with one or more other domains. In some implementations, the ontology may be modified, such as by adding or removing entire domains or nodes, or by modifying relationships between the nodes within the ontology. The actionable intent nodes under the same super domain e.
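
The sketch below models a tiny ontology of actionable intent nodes and property nodes and shows how a property node can be shared across domains; the domains and property names are illustrative examples, not the ontology of any actual assistant.

```python
# Toy ontology: actionable-intent nodes linked to property nodes.

ONTOLOGY = {
    "intents": {
        "make_reservation": {"properties": ["restaurant", "party_size", "date_time"]},
        "find_restaurant": {"properties": ["restaurant", "cuisine", "location"]},
    },
    "properties": {
        "restaurant": {"intents": ["make_reservation", "find_restaurant"]},
        "party_size": {"intents": ["make_reservation"]},
        "date_time": {"intents": ["make_reservation"]},
        "cuisine": {"intents": ["find_restaurant"]},
        "location": {"intents": ["find_restaurant"]},
    },
}

def shared_properties(intent_a, intent_b):
    """Property nodes linked to both actionable intents (shared across domains)."""
    a = set(ONTOLOGY["intents"][intent_a]["properties"])
    b = set(ONTOLOGY["intents"][intent_b]["properties"])
    return a & b

print(shared_properties("make_reservation", "find_restaurant"))  # {'restaurant'}
```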

For example, returning to FIG. The vocabulary index optionally includes words and phrases in different languages.

The natural language processor receives the token sequence e. In some implementations, the domain having the highest confidence value e. In some implementations, the domain is selected based on a combination of the number and the importance of the triggered nodes. In some implementations, additional factors are considered in selecting the node as well, such as whether the digital assistant has previously correctly interpreted a similar request from a user.
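
One way such domain selection could be scored, combining the number and the importance of triggered nodes, is sketched below; the node weights, domains, and scoring rule are assumptions made for illustration only.

```python
# Sketch of domain selection from triggered ontology nodes: sum the importance
# of every triggered node per domain and pick the highest-scoring domain.

DOMAIN_NODES = {
    "restaurant_reservation": {"reservation": 1.0, "restaurant": 0.8, "date_time": 0.5},
    "set_reminder": {"remind": 1.0, "date_time": 0.5},
}

def select_domain(triggered_words):
    best_domain, best_score = "", 0.0
    for domain, nodes in DOMAIN_NODES.items():
        score = sum(weight for node, weight in nodes.items() if node in triggered_words)
        if score > best_score:
            best_domain, best_score = domain, score
    return best_domain, best_score

print(select_domain({"restaurant", "date_time"}))  # -> restaurant_reservation domain
```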

Regardless of the reasons for listening to speech associated with a document, conventional text-to-speech processing is often not able to impart to the listener contextual information about the text that is being spoken. Further, in recent years, documents have become more complex and more diversified.

As a result, today's documents can have many different formats and contain various different document elements, including links, images, headings, tables, captions, footnotes, etc. Thus, there is a need to provide improved text-to-speech processing that can present contextual information to listeners. Text-to-speech processing can also generate audio output that users can listen to while on the go.

However, text-to-speech processing is processor-intensive, making it impractical for many portable devices that have limited processing power. Hence, there is also a need to manage creation, delivery and consumption of audio outputs that provide speech associated with documents. The improved text-to-speech processing can convert text from an electronic document into an audio output that includes speech associated with the text as well as audio contextual cues. The invention can be implemented in numerous ways, including as a method, system, device, or apparatus including a computer readable medium or a graphical user interface.

Several embodiments of the invention are discussed below.

As a computer-implemented method for converting text to speech, one embodiment of the invention can, for example, include at least:

As a computer-implemented method for converting text to speech, another embodiment of the invention can, for example, include at least:

As a computer-implemented method for generating an audio summary for a document, one embodiment of the invention can, for example, include at least:

As a method for presenting a text-based document in an audio fashion, one embodiment of the invention can, for example, include at least:

As a text-to-speech conversion system, one embodiment of the invention can, for example, include at least:

As a computer readable storage medium including at least computer program code for converting text to speech tangibly embodied therein, one embodiment can, for example, include at least:

Other aspects and advantages of the invention will become apparent from the following detailed description taken in conjunction with the accompanying drawings which illustrate, by way of example, the principles of the invention.

One aspect of the invention provides audio contextual cues to the listener when outputting speech (spoken text) pertaining to a document. The audio contextual cues can be based on an analysis of a document prior to a text-to-speech conversion. In another embodiment, audio contextual cues for the content of a document can also be imparted, for example, by any of: In one embodiment, the invention can process hyperlinks in a document in an intelligent manner.

In one implementation, when a block of text includes a hyperlink, a text-to-speech processor can indicate e. As one example, a low tone in the background can be played while a text-to-speech processor speaks the hyperlink. As still another example, a text-to-speech processor can use a distinct voice to let the user know that text being read is a hyperlink.
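
A minimal sketch of tagging hyperlink spans so that a text-to-speech processor can apply a distinct voice or a background tone is shown below; the cue names and the segment format are placeholders rather than any real TTS markup.

```python
import re

def tag_hyperlinks(html_fragment):
    """Split a fragment into plain-text and hyperlink segments with audio cues."""
    segments = []
    pattern = re.compile(r'<a href="[^"]*">(.*?)</a>', re.DOTALL)
    pos = 0
    for match in pattern.finditer(html_fragment):
        if match.start() > pos:
            segments.append({"text": html_fragment[pos:match.start()], "cue": None})
        # Hyperlink text gets a distinct voice and a low background tone.
        segments.append({"text": match.group(1),
                         "cue": {"voice": "link_voice", "background": "low_tone"}})
        pos = match.end()
    if pos < len(html_fragment):
        segments.append({"text": html_fragment[pos:], "cue": None})
    return segments

print(tag_hyperlinks('See the <a href="https://example.com">release notes</a> here.'))
```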

In one embodiment, audio contextual cues can be influenced by user preferences. Audio contextual cues can be, for example, set as user preferences in a software control panel associated with a text-to-speech processor.

According to another aspect of the invention, an audio summary can be generated for a file. The audio summary for a document can thereafter be presented to a user so that the user can hear a summary of the document without having to process the document to produce its spoken text via text-to-speech conversion.

Documents as used herein pertain to electronic documents. The electronic documents are electronically stored in an electronic file on a computer readable medium. For example, a document used herein can be of various different types and formats, including documents concerning text, word processing, presentation, webpage, electronic mail (e-mail), markup language, syndication, page description language, portable document format, etc.

Embodiments of the invention are discussed below with reference to FIGS. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes as the invention extends beyond these limited embodiments. The text-to-speech processing system includes a host computer, a portable media player, and a server computer. The host computer can be connected to the portable media player, for example, by using a USB cable or other cable, or by using a wireless network connection such as WiFi or Bluetooth.

The host computer can connect to the server computer over a network, for example the Internet. The host computer can be connected to the network either by a cable, for example an Ethernet cable, or by using a wireless network connection. The host computer can include a file system, which is used to access files and directories on the host computer. The host computer can also include one or more software applications, for example a media management application, a network application, and a text-to-speech conversion application (or text-to-speech converter).

The media management application can be used to organize and present e. The media management application can also be used to manage the transfer of audio files between the host computer and the portable media player, for example by performing a synching operation between the host computer and the portable media player. For ease of use on the portable media player, the audio files can be stored in a predetermined organization. For example, like types of documents e.

The network application can include any of a wide variety of network capable applications including, but not limited to, Web browsers, e-mail applications, and terminal applications. Also, the network application can be implemented as a module or part of the media management application. The text-to-speech conversion application can be used to convert electronic documents e.

Alternately, the text-to-speech conversion application can be used to generate speech output e. The generated speech output can be presented to a user using an audio output device. The audio output device can be a sound card, for example, or other built-in sound hardware such as an audio output device built into a motherboard. Speech output can be presented to the user by way of a speaker or headphones, for example.

The text-to-speech conversion application can interact with a network application to present a webpage or the contents of an e-mail mailbox to the user. In one embodiment, the text-to-speech conversion application can be used to convert documents, including webpages, RSS feeds, e-mails, text files, PDFs, or other documents having text, into audio files at the host computer. The text-to-speech conversion application can also be used to produce files that reside on the server computer. The files that reside on the server computer can include audio files as well as any of the documents mentioned above.

The audio files can, in one embodiment, be copied from the host computer to the portable media player. Further, the portable media player can be capable of presenting speech output to the user. The text-to-speech processing system can be, for example, implemented by the text-to-speech conversion application of FIG.

The text-to-speech processing system can include a text-to-speech analyzer. The text-to-speech analyzer can analyze a document and output a text-to-speech processing script. The document text-to-speech analyzer can, for example, identify different elements of the document, such as the table of contents, publishing information, footnotes, endnotes, tables, figures, embedded video or audio, document abstract, hyperlinks, proprietary elements e.

The text-to-speech processing script can then be created by the text-to-speech analyzer with embedded audio context cues to be interpreted by a text-to-speech processor. In one embodiment, the content of a document to be converted to speech can be rearranged in the text-to-speech processing script according to user preferences.

For example, footnotes in the document can be marked to be read in-line rather than at the bottom of the page, page numbers can be announced at the start of the page rather than at the end, a table of contents can be moved or omitted entirely, etc. The text-to-speech processor can output an audio file or can output speech directly. In one embodiment, in the case where the text-to-speech processor output is converted into an audio file, audio chapter information can be inserted into the text-to-speech processing script for conversion into chapter or track markers within the audio file e.
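
The following sketch shows how a text-to-speech processing script might be assembled from parsed elements while honoring such preferences (page numbers announced at the start of a page, footnotes read in-line, chapter markers flagged for the audio file creator); the element types, preference names, and cue labels are assumptions.

```python
def build_tts_script(elements, prefs):
    """Turn parsed document elements into a simple text-to-speech script (sketch)."""
    script = []
    for el in elements:
        if el["type"] == "page_break" and prefs.get("announce_page_numbers"):
            script.append({"say": "Page %d" % el["page"], "cue": "page_chime"})
        elif el["type"] == "body":
            script.append({"say": el["text"], "cue": None})
            if prefs.get("footnotes_inline"):
                for note in el.get("footnotes", []):
                    script.append({"say": note, "cue": "footnote_voice"})
        elif el["type"] == "chapter":
            # Flagged so the audio file creator can insert a chapter/track marker.
            script.append({"say": el["title"], "cue": None, "chapter_marker": True})
    return script

elements = [
    {"type": "chapter", "title": "Introduction"},
    {"type": "page_break", "page": 2},
    {"type": "body", "text": "Main text.", "footnotes": ["A footnote."]},
]
print(build_tts_script(elements, {"announce_page_numbers": True, "footnotes_inline": True}))
```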

The document text-to-speech processing script can be stored for later use. For example, the document text-to-speech script can be stored in a header of a file, in the directory that contains the file, or in some other linked file. In one embodiment, the document text-to-speech analyzer can resolve hyperlinks, either for immediate processing or for later use. In this case, a user can set a preference instructing the document text-to-speech analyzer how to resolve hyperlinks e.

Thus, references cited within a document, for example in footnotes or endnotes, can be processed as well and inserted into the audio file by the text-to-speech processor. In one embodiment, a text-to-speech processing script can be embedded in a document upon creation of the document, with the assumption that some users will want to have the document read to them rather than reading it themselves.

Alternatively, a standardized markup language e. For example, a creator (author) of a document can, in advance, pick the voice that a text-to-speech processor will use to read a document. In another example, a creator can pre-select voices for the dialogue of characters in a document, such as a book. In a third example, a webmaster seeking to design a webpage accessible to the visually impaired can incorporate commands to be processed by a text-to-speech processor, rather than relying on a document text-to-speech analyzer to correctly interpret his webpage design.

In the above description, such as illustrated in FIG. However, the text-to-speech analyzer and text-to-speech processor need not be separate. Further, the text-to-speech processing script is also not required in other embodiments. Thus, in one embodiment, a single software application combining the functions of the text-to-speech analyzer and the text-to-speech processor can process a document and output audio, either as speech output e. The file extractor can include a variety of modules capable of processing different types of documents. For example, a file extractor can include an HTML file extractor, a PDF file extractor, a text file extractor, an RSS extractor, and an e-mail extractor, as well as other modules for extracting other types of documents (Microsoft Word files, RTF files, etc.).

The file extractor can output the contents (including at least text) of an extracted file to a speech scripting generator. The speech scripting generator can take text that has been extracted by a file extractor and apply heuristics e. The speech markup tags can indicate when different speech attributes e.
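
For illustration, a speech scripting generator might emit markup along these lines; the tag syntax below is invented for the sketch and is not SSML or any particular product's markup.

```python
def to_speech_markup(parsed_elements):
    """Emit simple markup telling a TTS processor when to change speech attributes."""
    lines = []
    for kind, text in parsed_elements:
        if kind == "heading":
            # Headings: slower rate and a distinct voice.
            lines.append('<voice name="announcer"><rate value="slow">%s</rate></voice>' % text)
        elif kind == "caption":
            # Captions: spoken more quietly.
            lines.append('<volume value="soft">%s</volume>' % text)
        else:
            lines.append(text)
    return "\n".join(lines)

print(to_speech_markup([("heading", "Results"),
                        ("body", "The study found..."),
                        ("caption", "Figure 1: apparatus.")]))
```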

The speech scripting generator can output instructions i. For example, the audio file creator can incorporate a text-to-speech processor and a sound recording application, where the output of the text-to-speech processor is recorded and stored as an audio file.

In an alternate embodiment, the audio file creator can output speech to present to a listener, for example by using an audio output device as described above in reference to FIG. The audio file creation process can be implemented using, for example, the text-to-speech processing system of FIG. The audio file creation process begins by selecting a document for conversion into an audio file. A document can be any electronic file or link that contains text.

Text files can be of any format, for example: The audio file creation process continues by parsing the selected document. For example, parsing can be used to identify the various text elements in the selected document, including, but not limited to, author information, document title, header text, footer text, body text, table captions, picture captions, abstract text, footnotes, endnotes, table of contents, hyperlinks, and copyright information.

In addition, parsing can involve identifying supplemental elements that may be present in the selected document. Examples of supplemental elements are markup tags, typesetting information, binary code, embedded video, pictures, proprietary content such as Flash or QuickTime, and metadata. In one embodiment, when hyperlinks are present, one or more hyperlinks can be opened and resolved during the parsing of the selected document.

As another example, if the selected document pertains to e-mail, parsing can include retrieving e-mails from a server. Once the document has been parsed, the document text is converted to speech consistent with the document parsing using a text-to-speech processor, for example the text-to-speech processor of FIG.

Different types of text elements can be converted to speech differently, using different speech cadence, inflection, or tone, or by indicating different types of text using auditory cues.

The audio file creation process continues by creating an audio file using the speech created by the text-to-speech conversion of step, for example, by recording i. Alternately, a text-to-speech processor can create an audio file directly. Next, the audio file can be transferred to a media player application. Finally, the audio file can be transferred to a portable media player, for example by performing a synching operation between the portable media player, e. The transfer of the audio file to the portable media player can be managed using the media management application.

Alternately, the audio file can be transferred to a media player application directly, without first performing step. In one embodiment, the audio file can be compressed before being transferred to the media player application. For example, in the audio interchange file format.

Alternately, in one embodiment, a compressed audio file can be created in step, thus eliminating the need for compression step. The text-to-speech processing process begins by identifying text elements in a given document. The identifying of text elements in a document can include, for example, parsing the document as described in block of FIG. Other elements in the document, such as supplemental elements, including pictures, embedded video, markup language tags, and metadata, can also be identified. The supplemental elements may also include text that is not normally presented to a reader when the document is displayed, such as copyright information or document revision information.

Next, the text-to-speech processing process determines which text elements will be spoken. Examples of text elements that can be spoken include, but are not limited to, titles, body text, footnotes, and picture captions. Examples of text elements that might not be spoken include markup tags, tables of contents, and other text elements that may be difficult to convert to speech.

Those particular text elements that are not to be spoken can be designated as non-spoken text elements during the determination. The text-to-speech processing process continues by determining the order in which to speak the spoken elements.

For example, the text-to-speech processing process can determine that footnotes contained in a document are to be spoken in line i. Other examples of text elements that may be spoken in a different order than they occur in the text document include page numbers, which can be spoken at the beginning of the page rather than at the end, author information, and endnotes.

Next, audio cues that will accompany spoken elements can be determined. Audio cues include audio contextual cues that are presented to the listener in order to better convey the content of a particular document. Audio contextual cues for the content of a document can also be imparted, for example, by altering the speed of the text as it is read, changing the voice used by the text-to-speech processor, playing a sound to announce a contextual change, speaking the text while a background noise is played, or altering the volume of the voice speaking the text.
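
A simple way to represent this determination is a table mapping each spoken element type to its audio contextual cue (speech rate, voice, announcing sound, background audio, or volume), as in the sketch below; the specific cue choices and element types are assumptions.

```python
# Illustrative mapping from text-element type to the audio contextual cue that
# will accompany it when spoken.

AUDIO_CUES = {
    "title":     {"rate": 0.9, "voice": "narrator", "announce_sound": "chime"},
    "body":      {"rate": 1.0, "voice": "narrator"},
    "footnote":  {"rate": 1.1, "voice": "secondary", "volume": 0.8},
    "hyperlink": {"rate": 1.0, "voice": "narrator", "background": "low_tone"},
    "caption":   {"rate": 1.0, "voice": "narrator", "volume": 0.8},
}

def cue_for(element_type):
    # Fall back to plain body narration for unknown element types.
    return AUDIO_CUES.get(element_type, AUDIO_CUES["body"])

print(cue_for("footnote"))
```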

Next, the spoken elements as determined in step are associated with the audio cues that were determined in step. The association of the spoken elements and the audio cues can produce a tagged document or a speech script for use with a document. The text-parsing process can be used to resolve links e. For example, the text-parsing process can be performed by the text-to-speech analyzer of FIG. The text-parsing process begins by selecting text within an electronic document to be parsed.

Next, a determination determines if links e. For example, a user can indicate e. Resolving a link can involve following the link to another document, following the link to another place in the same document, or simply determining where the link leads. In some cases, such as when the document being processed is a webpage, it may be undesirable to follow all links, since webpages sometimes contain numerous links.

In other cases, it may be desirable to resolve one or more links in-line, such as when the webpage contains footnotes. Alternatively, the text-parsing process may simply determine that a document contains one or more links without resolving any of the links.

Thus the determination can be used to determine which, if any, links are to be resolved in a particular document or block of text. If the decision determines that unresolved links are not to be resolved, the selected text is parsed and the parsing process ends.

Parsing can be, for example, the parsing as described in reference to FIG. On the other hand, if the determination determines that one or more unresolved links are to be resolved, then the text-parsing process continues to a decision, which determines if there are unresolved links in the selected text. If the decision determines that there are no unresolved links in the selected block of text, then the selected text is parsed and the text-parsing process ends.

Alternatively, if the decision determines that there are unresolved links in the selected text, then the first unresolved link is resolved. Next, a decision determines if the link that has been resolved is a link to new text. If the decision determines that the resolved link is not a link to new text, for example if the link is a link e.
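
The decision flow described above might look roughly like the following sketch, where a preference selects whether links are ignored, merely announced, or resolved and read in-line; the function names, modes, and fetch behavior are hypothetical stand-ins.

```python
def fetch_linked_text(url):
    # Placeholder: a real implementation would retrieve and extract the text.
    linked_documents = {"https://example.com/fn1": "Footnote one text."}
    return linked_documents.get(url)

def resolve_links(segments, mode="inline"):
    """mode: 'ignore' (speak link text only), 'announce', or 'inline'."""
    resolved = []
    for seg in segments:
        resolved.append(seg)
        url = seg.get("href")
        if not url or mode == "ignore":
            continue
        if mode == "announce":
            # Just signal that a link is present, without following it.
            resolved.append({"text": "(link)", "cue": "low_tone"})
        elif mode == "inline":
            new_text = fetch_linked_text(url)
            if new_text:  # only links that lead to new text are read in-line
                resolved.append({"text": new_text, "cue": "quoted_voice"})
    return resolved

print(resolve_links([{"text": "See note 1", "href": "https://example.com/fn1"}]))
```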