CA2281644A1 - Method and apparatus for training of sensory and perceptual systems in LLI subjects - Google Patents

Method and apparatus for training of sensory and perceptual systems in LLI subjects

Info

Publication number
CA2281644A1
Authority
CA
Canada
Prior art keywords
subject
girl
block
stimulus
phoneme
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002281644A
Other languages
French (fr)
Inventor
William M. Jenkins
Michael M. Merzenich
Steven L. Miller
Bret E. Peterson
Paula Tallal
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Scientific Learning Corp
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of CA2281644A1 publication Critical patent/CA2281644A1/en
Abandoned legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/04 Electrically-operated educational appliances with audible presentation of the material to be studied
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065 Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • G09B5/08 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B5/14 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/04 Speaking
    • G09B21/00 Teaching, or communicating with, the blind, deaf or mute
    • G09B21/009 Teaching or communicating with deaf persons
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/70 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training

Abstract

The apparatus and method incorporate a number of different programs to be played by the subject. The programs artificially process selected portions of language elements, called phonemes, so they will be more easily distinguished by an LLI subject, and gradually improve the subject's neurological processing of the elements through repetitive stimulation. The programs continually monitor a subject's ability to distinguish the processed language elements, and adaptively configure themselves to challenge and reward the subject by altering the degree of processing. Through adaptive control and repetition of processed speech elements, and presentation of the speech elements in a creative fashion, a subject's temporal processing of acoustic events common to speech is significantly improved.

Description

PCT/US98/26528

METHOD AND APPARATUS FOR TRAINING OF SENSORY AND PERCEPTUAL SYSTEMS IN LLI SUBJECTS

This application is related to co-pending U.S. Patent Application Serial No. 08/992071 (Docket SLC:707C), filed December 17, 1997, entitled "METHOD AND APPARATUS FOR TRAINING OF SENSORY AND PERCEPTUAL SYSTEMS IN LLI SUBJECTS"; U.S. Patent Application Serial No. 08/992072 (Docket SLC:707B), filed December 17, 1997, entitled "METHOD AND APPARATUS FOR TRAINING OF COGNITIVE AND MEMORY SYSTEMS IN HUMANS"; and U.S. Patent Application Serial No. 08/982189 (Docket SLC:707A), filed December 17, 1997, entitled "METHOD AND APPARATUS FOR TRAINING OF SENSORY AND PERCEPTUAL SYSTEMS IN LLI SUBJECTS", all assigned to Scientific Learning Corporation.
BACKGROUND OF THE INVENTION
1. Field of the Invention

This invention relates in general to the field of education of language-learning impaired (LLI) subjects, and more specifically to a computer program for training the auditory processing system in subjects having receptive language problems.
2. Description of the Related Art

Up to ten percent of children have language-learning impairments (LLI) resulting from the inability to accurately process short duration acoustic events at the rates that occur in normal speech. Their trouble distinguishing among elements of speech is neurologically based and has far-reaching consequences: academic failure, emotional and disciplinary problems, and possibly diminished lifelong achievement and self-image. No bracket of intelligence, race, gender or economic level is immune from this problem.
More specifically, children with LLI have difficulty detecting and identifying sounds that occur simultaneously or in close proximity to each other, a phenomenon known as "masking." Because of masking, children with LLI require sounds that are as much as 45 decibels more intense than a preceding or subsequent masking noise to distinguish and understand them. In addition, children with LLI are consistently poorer at detecting a brief tone presented with a masking noise, particularly when the brief tone is turned on immediately prior to the masking noise. This phenomenon is called "backward masking."
Similarly, when the brief tone is turned on immediately after the masking noise, a similar decrease in detectability can occur. This phenomenon is called "forward masking." For a tone to be detected by a child with LLI in the presence of a masking noise, the tone must be separated in time or frequency from the masking noise.
The inability to accurately distinguish and process short duration sounds often causes children to fall behind in school. Since the children can't accurately interpret many language sounds, they can't remember which symbols represent which sounds. This deficiency causes difficulties in learning to read (translating from symbols to sounds), and in spelling. In fact, it is common for a child with LLI to fall two to three years behind his/her peers in speech, language and reading development.
One way children develop such auditory processing problems is from middle ear infections when they are young and beginning to develop the oral representations of language in the central auditory nervous system.
When a child has an ear infection, fluid can build up and block or muffle the sound waves entering the ear, causing intermittent hearing loss. Even if the infection doesn't permanently damage the ear, the child's brain doesn't learn to process some sounds because it hasn't heard them accurately before, on a consistent basis. This typically occurs during a critical period of brain development when the brain is building the nerve connections necessary to accurately process acoustic events associated with normal speech.
Researchers believe that the auditory processing problem is essentially one of timing. Vowel sounds like /a/ and /e/ usually last at least 100 milliseconds and typically have constant frequency content. Consonants, on the other hand, typically have modulated frequency components, and last less than 40 milliseconds. Children with LLI cannot process these faster speech elements, especially the hard consonants like /t/, /p/, /d/ and /b/, if they occur either immediately before or after vowels, or if they are located near other consonants. Rather than hearing the individual sounds that make up a particular phoneme, children with LLI integrate closely associated sounds together over time. Since the duration of vowels is typically longer than that of consonants, the modulated frequency portions of consonants are often lost in the integration, an effect that may also hinder the resolution of the vowel, particularly short duration vowels.
This problem of abnormal temporal integration of acoustic events over time is not limited to children with LLI. Rather, the problem extends to stroke victims who have lost the neurological connections necessary to process speech, as well as to individuals raised in one country, having one set of language phonemes, and attempting to learn the language of another country, having a distinct set of language phonemes. For example, it is known that an individual raised in Japan is not often presented with phonemes similar to the English r's and l's, because those consonants are not common in the Japanese language. Similarly, there are many subtleties in the sounds made by a speaker of Japanese that are difficult to distinguish unless one was raised in Japan. The phonetic differences between languages are distinctions that must be learned, and are often very difficult. But they are clearly problems that relate to the temporal processing of short duration acoustic events.

The above described temporal processing deficiency has little if anything to do with intelligence. In fact, some LLI specialists argue that brains choosing this different route by which to absorb and reassemble bits of speech may actually stimulate creative intelligence, but at the expense of speech and reading problems.
Recent studies have shown that if the acoustic events associated with phonemes that are difficult to distinguish, such as /ba/ and /da/, are slowed down, or if the consonant portions of the phonemes are emphasized, then students diagnosed as LLI can accurately distinguish between the phonemes. In addition, if the interval between two complex sounds is lengthened, LLI students are better able to process the sounds distinctly.
Heretofore, the solution to the processing problem has been to place LLI students in extended special education and/or speech therapy training programs that focus on speech recognition and speech production. Or, more commonly, repetitive reading programs, phonic games, or other phonic programs are undertaken. These programs often last for years, with a success rate that is often more closely associated with the skill of the speech and language professional than with the program of study.
What is needed is a method and apparatus that allows a subject with abnormal temporal processing to train, or retrain, their brain to recognize and distinguish short duration acoustic events that are common in speech. Moreover, what is needed is a program that repetitively trains a subject to distinguish phonemes at a normal rate, by first stretching and/or emphasizing elements of speech to the point that they are distinguishable, or separating speech elements in time, and then adaptively adjusting the stretching, emphasis and separation of the speech elements to the level of normal speech. The adaptive adjustments should be made so as to encourage the subject to continue with the repetitions, and the number of repetitions should be sufficient to develop the necessary neurological connections for normal temporal processing of speech.
Moreover, the program should provide acoustic signals to the brain that are better for phonetic training than normal human speech.
SUMMARY
To address the above-detailed deficiencies, the present invention provides a method for training the sensory, perceptual and cognitive systems in a human. The method repetitively provides a first acoustic event to the human, where the first acoustic event is stretched in the time domain. The method then sequentially provides a second acoustic event to the human for recognition. The method then requires the human to recognize the second acoustic event within a predetermined time window. If the human recognizes the second acoustic event within the predetermined time window, the amount of stretching applied to the first acoustic event is reduced.
By repetitively providing two acoustic events, stretched in the time domain, to the subject, and by reducing the amount of stretching applied to the acoustic events as the subject correctly distinguishes between the events, the sensory, perceptual and cognitive systems of the human are trained.
In another aspect, a method is provided for training an LLI subject to distinguish between frequency sweeps common in phonemes. The method presents a first frequency sweep that increases in frequency. The method also presents a second frequency sweep that decreases in frequency. The order of presenting the first and second frequency sweeps is random. The first and second frequency sweeps are separated by an inter-stimulus interval (ISI). After presenting the frequency sweeps, the method requires an individual to recognize the order of presentation of the first and second frequency sweeps. The ISI separating the first and second frequency sweeps is reduced or increased as the individual recognizes or fails to recognize the order of presentation, respectively. In addition, the duration of the first and second frequency sweeps is reduced as the individual repeatedly recognizes their order of presentation. By randomly presenting frequency sweeps to a subject, separated by an ISI, and by adaptively varying the duration of the sweeps and the ISI according to the correct or incorrect recognition by the subject, the subject is trained to better distinguish between common phonemes having similar frequency characteristics.
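The adaptive rule described above can be sketched as a simple one-up/one-down staircase. This is an illustrative sketch only: the 50 ms step size and the 0-500 ms bounds are assumptions made for the example, not parameters taken from the patent.

```python
def adapt_isi(isi_ms, correct, step_ms=50, floor_ms=0, ceil_ms=500):
    """Return the next inter-stimulus interval: shorter after a correct
    response, longer after an error, clamped to [floor_ms, ceil_ms].
    Step size and bounds are illustrative assumptions."""
    isi_ms = isi_ms - step_ms if correct else isi_ms + step_ms
    return max(floor_ms, min(ceil_ms, isi_ms))

# A run of mostly correct responses drives the ISI downward:
isi = 500
for correct in [True, True, True, False, True]:
    isi = adapt_isi(isi, correct)
# isi is now 350
```

The same rule can be applied to sweep duration, so that both the interval and the stimuli themselves approach the timing of normal speech as performance improves.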
In yet another aspect, the present invention provides an adaptive method for improving a user's discrimination of short duration acoustic events. The method displays a plurality of graphical images that are associated with modified acoustic events. The graphical images are associated in pairs with particular modified acoustic events such that two different graphical images are associated with a particular modified acoustic event. When any of the plurality of graphical images is selected, its associated modified acoustic event is presented. The method requires the user to discriminate between the acoustic events by sequentially selecting two different graphical images that are associated with the same modified acoustic event, from among all of the graphical images. When the user sequentially selects two images corresponding to the same modified acoustic event, those images are removed from the set of all graphical images. As the user continues to sequentially select two images corresponding to the same modified acoustic event, the number of graphical images displayed increases, and the amount by which the acoustic events are modified is reduced.
The present invention also provides a method to train a subject to discriminate between similar acoustic events commonly found in spoken language. The method utilizes a computer for displaying images, for modifying the similar acoustic events, and for acoustically producing the modified similar acoustic events to the subject. The method selects a pair of words that have similar acoustic properties, displays a pair of graphical images representative of each of the pair of words, modifies one of the pair of words by stretching it in the time domain, acoustically produces the modified word to the subject, and requires the subject to select from the pair of graphical images, an image representative of the produced modified word. If the subject correctly selects the graphical image representative of the modified word, a different word is similarly presented. After repeated correct selections, the amount of stretching applied to the words is reduced.
In yet another aspect, it is a feature of the present invention to provide a method for repetitively and adaptively training a subject that has subnormal temporal acoustic processing capabilities to distinguish between phonemes that have similar acoustic characteristics. The method provides a plurality of phoneme pairs, where each pair has similar acoustic characteristics. For each of the plurality of phoneme pairs, the method provides a pair of associated graphic images. The method selects from among the plurality of phoneme pairs a phoneme pair to be presented to the subject. The selected phoneme pair is then processed according to a predetermined skill level. After processing, the processed selected phoneme pair is presented to the subject. As a trial, the subject is required to recognize one of the processed phonemes from the selected phoneme pair by selecting its associated graphic image. Finally, the above trial is repeated. As the subject correctly recognizes the appropriate phoneme from the phoneme pair, the skill level is increased. That is, the amount of processing applied to the phoneme pairs is reduced.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other objects, features, and advantages of the present invention will become better understood with regard to the following description and accompanying drawings, where:
FIGURE 1 is a block diagram of a computer system for executing a program according to the present invention.
FIGURE 2 is a block diagram of a computer network for executing a program according to the present invention.
FIGURE 3 is a chart illustrating frequency/energy characteristics of two phonemes within the English language.
FIGURE 4 is a chart illustrating auditory reception of a phoneme by a subject having normal receptive characteristics, and by a subject whose receptive processing is impaired.
FIGURE 5 is a chart illustrating stretching of a frequency envelope in time, according to the present invention.
FIGURE 6 is a chart illustrating emphasis of selected frequency components, according to the present invention.
FIGURE 7 is a chart illustrating up-down frequency sweeps of varying duration, separated by a selectable inter-stimulus-interval (ISI), according to the present invention.
FIGURE 8 is a pictorial representation of a game selection screen according to the present invention.
FIGURE 9 is a pictorial representation of a game entitled "Old MacDonald's Flying Farm" according to the present invention.
FIGURE 10 is a flow chart illustrating the adaptive auditory training procedures embodied in the game Old MacDonald's Flying Farm.
FIGURES 11 and 12 are pictorial representations of a game entitled "Block Commander" according to the present invention.
FIGURE 13 is a flow chart illustrating the adaptive auditory training procedures embodied in the game Block Commander.
FIGURES 14 and 15 are pictorial representations of a game entitled "Circus Sequence" according to the present invention.
FIGURE 16 is a flow chart illustrating the initial training procedures embodied in the game Circus Sequence.
FIGURE 17 is a flow chart illustrating the adaptive auditory training procedures embodied in the game Circus Sequence.
FIGURE 18 is a pictorial representation of a game entitled "Phonic Match" according to the present invention.
FIGURE 19 includes two tables illustrating the processing levels and the training levels embodied in the game Phonic Match.
FIGURE 20 is a flow chart illustrating the adaptive auditory training process embodied in the game Phonic Match.
FIGURES 21 and 22 are pictorial representations of a game entitled "Phonic Words" according to the present invention.
FIGURE 23 is a flow chart illustrating the adaptive auditory training process embodied in the game Phonic Words.
FIGURES 24 and 25 are pictorial representations of a game entitled "Phoneme Identification" according to the present invention.
FIGURE 26 is a flow chart illustrating the initial training procedures embodied in the game Phoneme Identification.
FIGURE 27 is a flow chart illustrating the adaptive auditory training process embodied in the game Phoneme Identification.
FIGURE 28 is a pictorial representation of a game entitled "Language Comprehension Builder" according to the present invention.
FIGURE 29 is a flow chart illustrating the initial training procedures embodied in the game Language Comprehension Builder.
FIGURE 30 is a flow chart illustrating the adaptive auditory training procedures embodied in the game Language Comprehension Builder.
FIGURE 31 is a flow chart illustrating a time-scale modification algorithm for modifying acoustic elements according to the present invention.
FIGURE 32 is a flow chart illustrating a filter-bank summation emphasis algorithm for modifying acoustic elements according to the present invention.
FIGURE 33 is a flow chart illustrating an overlap-add emphasis algorithm for modifying acoustic elements according to the present invention.
DETAILED DESCRIPTION
Referring to Figure 1, a computer system 100 is shown for executing a computer program to train, or retrain, a language-learning impaired (LLI) subject, according to the present invention. The computer system 100 contains a computer 102, having a CPU, memory, hard disk and CD-ROM drive (not shown), attached to a monitor 104. The monitor 104 provides visual prompting and feedback to the subject during execution of the computer program. Attached to the computer 102 are a keyboard 105, speakers 106, a mouse 108, and headphones 110. The speakers 106 and the headphones 110 provide auditory prompting and feedback to the subject during execution of the computer program. The mouse 108 allows the subject to navigate through the computer program, and to select particular responses after visual or auditory prompting by the computer program. The keyboard 105 allows an instructor to enter alphanumeric information about the subject into the computer 102. Although a number of different computer platforms are applicable to the present invention, embodiments of the present invention execute on either IBM compatible computers or Macintosh computers.
Now referring to Figure 2, a computer network 200 is shown. The computer network 200 contains computers 202, 204, similar to that described above with reference to Figure 1, connected to a server 206. The connection between the computers 202, 204 and the server 206 can be made via a local area network (LAN), a wide area network (WAN), or via modem connections, directly or through the Internet. A printer 208 is shown connected to the computer 202 to illustrate that a subject can print out reports associated with the computer program of the present invention. The computer network 200 allows information such as test scores, game statistics, and other subject information to flow from a subject's computer 202, 204 to a server 206. An administrator can then review the information and can then download configuration and control information pertaining to a particular subject, back to the subject's computer 202, 204.
Details of the type of information passed between a subject's computer and a server are provided in co-pending U.S. Application No. ____, entitled "Remote Computer-Assisted Professionally Supervised Teaching System", assigned to Scientific Learning Corporation.
Before providing a detailed description of the present invention, a brief overview of certain components of speech will be provided, along with an explanation of how these components are processed by LLI subjects.
Following the overview, general information on speech processing will be provided so that the reader will better appreciate the novel aspects of the present invention.
Referring to Figure 3, a chart is shown that illustrates frequency components, over time, for two distinct phonemes within the English language. Although different phoneme combinations are applicable to illustrate features of the present invention, the phonemes /da/ and /ba/ are shown. For the phoneme /da/, a downward sweep frequency component 302, at approximately 2.5-2 kHz, is shown to occur over a 35 ms interval. In addition, a downward sweep frequency component 304, at approximately 1 kHz, is shown to occur during the same 35 ms interval. At the end of the 35 ms interval, a constant frequency component 306 is shown, whose duration is approximately 110 ms. Thus, in producing the phoneme /da/, the stop consonant portion of the element /d/ is generated, having high frequency sweeps of short duration, followed by a long vowel element /a/ of constant frequency.
Also shown are frequency components for the phoneme /ba/. This phoneme contains an upward sweep frequency component 308, at approximately 2 kHz, having a duration of approximately 35 ms. The phoneme also contains an upward sweep frequency component 310, at approximately 1 kHz, during the same 35 ms period. Following the stop consonant portion of the phoneme is a constant frequency vowel portion 314, whose duration is approximately 110 ms.
Thus, both the /ba/ and /da/ phonemes begin with stop consonants having modulated frequency components of relatively short duration, followed by a constant frequency vowel component of longer duration. The distinction between the phonemes exists primarily in the 2 kHz sweeps during the initial 35 ms interval. Similarity exists between other stop consonants such as /ta/, /pa/, /ka/ and /ga/.
Referring now to Figure 4, the amplitude of a phoneme, for example /ba/, is viewed in the time domain. A short duration, high amplitude peak waveform 402 is created upon release of either the lips or the tongue when speaking the consonant portion of the phoneme, and rapidly declines to a constant amplitude signal of longer duration. For an individual with normal temporal processing, the waveform 402 will be understood and processed essentially as it is. However, for an individual who is language-learning impaired, or who has abnormal temporal processing, the short duration, higher frequency consonant burst will be integrated over time with the lower frequency vowel, and depending on the degree of impairment, will be heard as the waveform 404. The result is that the information contained in the higher frequency sweeps associated with consonant differences will be muddled, or indistinguishable.
With the above general background of speech elements, and how LLI subjects process them, a general overview of speech processing will now be provided. As mentioned above, one problem that exists in LLI subjects is the inability to distinguish between short duration acoustic events. If the duration of these acoustic events is stretched, in the time domain, it is possible to train LLI subjects to distinguish between these acoustic events. An example of such time domain stretching is shown in Figure 5, to which attention is now directed.
In Figure 5, a frequency vs. time graph 500 is shown that illustrates a waveform 502 having short duration characteristics similar to the waveform 402 described above. Using existing computer technology, the analog waveform 502 can be sampled and converted into digital values (using a Fast Fourier Transform, for example). The values can then be manipulated so as to stretch the waveform in the time domain to a predetermined length, while preserving the amplitude and frequency components of the modified waveform. The modified waveform can then be converted back into an analog waveform (using an inverse FFT) for reproduction by a computer, or by some other audio device. The waveform 502 is shown stretched in the time domain to durations of 60 ms (waveform 504) and 80 ms (waveform 506). By stretching the consonant portion of the waveform 502 without affecting its frequency components, subjects with LLI can begin to hear distinctions in common phonemes.
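The stretching step just described can be illustrated with a minimal windowed overlap-add time stretch in Python with NumPy. This is a simplified sketch, not the patent's algorithm (Figure 31 describes the actual time-scale modification); the frame and hop sizes are illustrative, and a production implementation would add phase alignment (e.g. a phase vocoder) to keep overlapping frames coherent.

```python
import numpy as np

def time_stretch(signal, factor, frame=1024, hop=256):
    """Lengthen `signal` by roughly `factor` (>1) by re-spacing windowed
    frames: frames are taken every `hop` samples but written back every
    `hop * factor` samples, so duration grows while the local frequency
    content of each frame is preserved."""
    window = np.hanning(frame)
    syn_hop = int(hop * factor)
    n_frames = max(1, (len(signal) - frame) // hop + 1)
    out = np.zeros(syn_hop * (n_frames - 1) + frame)
    norm = np.zeros_like(out)
    for i in range(n_frames):
        grain = signal[i * hop : i * hop + frame]
        if len(grain) < frame:  # pad the final short frame
            grain = np.pad(grain, (0, frame - len(grain)))
        start = i * syn_hop
        out[start : start + frame] += grain * window
        norm[start : start + frame] += window
    norm[norm < 1e-8] = 1.0  # avoid division by zero at the edges
    return out / norm

# A 1 kHz tone at a 16 kHz sample rate, stretched to about twice its length:
sr = 16000
tone = np.sin(2 * np.pi * 1000 * np.arange(sr) / sr)  # 1 second
stretched = time_stretch(tone, 2.0)
```

The dominant frequency of the output stays near 1 kHz; only the duration changes, which is the property the patent relies on when slowing consonant bursts without shifting their pitch.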
Another method that may be used to help LLI subjects distinguish between phonemes is to emphasize selected frequency envelopes within a phoneme. Referring to Figure 6, a graph 600 is shown illustrating a frequency envelope 602 that varies by approximately 27 Hz. By detecting frequency modulated envelopes that vary from, say, 3-30 Hz, similar to frequency variations in the consonant portion of phonemes, and selectively emphasizing those envelopes, they are made more easily detectable by LLI subjects. A 10 dB emphasis of the envelope 602 is shown in waveform 604, and a 20 dB emphasis in the waveform 606.
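A simplified illustration of this envelope emphasis: isolate the 3-30 Hz band of the signal's amplitude envelope with an FFT mask, amplify it by the desired decibel gain, and re-modulate the carrier. This is a sketch under stated assumptions, not the patent's implementation (Figures 32 and 33 describe the actual filter-bank summation and overlap-add emphasis algorithms); the rectified-signal envelope and the FFT band mask are choices made for brevity.

```python
import numpy as np

def emphasize_envelope(signal, sr, gain_db=20.0, lo=3.0, hi=30.0):
    """Boost the slow (lo-hi Hz) amplitude-envelope modulations of
    `signal` by `gain_db`, leaving the carrier frequencies in place."""
    envelope = np.abs(signal)                      # crude amplitude envelope
    spec = np.fft.rfft(envelope)
    freqs = np.fft.rfftfreq(len(envelope), 1.0 / sr)
    band = (freqs >= lo) & (freqs <= hi)
    mod = np.fft.irfft(spec * band, len(envelope))  # 3-30 Hz component only
    gain = 10 ** (gain_db / 20.0)
    new_env = envelope + (gain - 1.0) * mod         # amplify that component
    safe = np.where(envelope > 1e-8, envelope, 1.0)
    return signal * (new_env / safe)                # re-modulate the carrier

# Demo: a 1 kHz tone amplitude-modulated at 10 Hz, emphasized by 20 dB.
sr = 8000
t = np.arange(sr) / sr
tone = (1 + 0.1 * np.sin(2 * np.pi * 10 * t)) * np.sin(2 * np.pi * 1000 * t)
emphasized = emphasize_envelope(tone, sr)
```

After emphasis the 10 Hz modulation is much deeper while the 1 kHz carrier is unchanged, mirroring how the patent makes the slow modulations of consonant portions easier for an LLI subject to detect.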
A third method that may be used to train LLI subjects to distinguish short duration acoustic events is to provide frequency sweeps of varying duration, separated by a predetermined interval, as shown in Figure 7. More specifically, an upward frequency sweep 702 and a downward frequency sweep 704 are shown, having durations varying between 25 and 80 milliseconds, and separated by an inter-stimulus interval (ISI) of between 500 and 0 milliseconds. The duration and frequency of the sweeps, and the inter-stimulus interval between the sweeps, are varied depending on the processing level of the LLI subject, as will be further described below.
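The stimuli above can be sketched as linear chirps assembled into a two-sweep trial. The 1-2 kHz range and the linear sweep shape are illustrative assumptions; the text specifies only the duration (25-80 ms) and ISI (500-0 ms) ranges.

```python
import numpy as np

def sweep(direction, duration_ms, sr=16000, f0=1000.0, f1=2000.0):
    """Linear frequency sweep ("chirp") rising or falling between f0 and f1.
    The frequency range is an illustrative assumption."""
    n = int(sr * duration_ms / 1000.0)
    t = np.arange(n) / sr
    lo, hi = (f0, f1) if direction == "up" else (f1, f0)
    # Instantaneous frequency ramps linearly from lo to hi over the sweep.
    phase = 2 * np.pi * (lo * t + (hi - lo) * t**2 / (2 * t[-1]))
    return np.sin(phase)

def trial(order, duration_ms=35, isi_ms=300, sr=16000):
    """Two sweeps in the given order ("up"/"down"), separated by the ISI."""
    gap = np.zeros(int(sr * isi_ms / 1000.0))
    return np.concatenate([sweep(order[0], duration_ms, sr), gap,
                           sweep(order[1], duration_ms, sr)])

stimulus = trial(("up", "down"), duration_ms=35, isi_ms=300)
```

In a full exercise, the subject would report the presentation order, and the ISI and sweep duration would shrink on correct responses, per the adaptive rule described in the text.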
Each of the above described methods has been combined in a unique fashion by the present invention to provide an adaptive training method and apparatus for training subjects having abnormal temporal processing abilities to recognize and distinguish short duration acoustic events that are common in speech. The present invention is embodied in a computer program entitled Fast ForWord by Scientific Learning Corporation. The computer program is provided to an LLI subject via a CD-ROM which is input into a general purpose computer such as that described above with reference to Figure 1. In addition, a user may log onto a server, via an Internet connection, for example, to upload test results, and to download training parameters for future exercises. Specifics of the present invention will now be described with reference to Figures 8-30.
Referring first to Figure 8, a pictorial representation is shown of a game selection screen 800. The game selection screen 800 is similar to that provided to an LLI subject upon initialization of the computer program according to the present invention. The game selection screen 800 includes the titles of seven computer games that provide distinct training exercises for improving speech recognition in subjects who abnormally process temporal acoustic events, and for building, or rebuilding the neurological connections necessary to accurately process phonemes at the rates common in speech. The game titles include: 1) Old MacDonald's Flying Farm;
2) Block Commander; 3) Circus Sequence; 4) Phonic Match; 5) Phonic Words;
6) Phoneme Identification; and 7) Language Comprehension Builder. Each of these games will be discussed in greater detail below.
When a subject begins execution of the Fast ForWord computer program, he/she is presented with a screen similar to the screen 800. More specifically, upon initiation of the program, the subject is presented with a screen that lists the subjects that are currently being trained by the program. The subject then selects his/her name from the list. Once the subject has selected his/her name, a screen similar to 800 appears, typically listing one of the seven programs, according to a training schedule that is dictated by the program, or is modified by an instructor. The order of the games, and the selection of which one of the seven games is presented in the screen 800, varies from day to day. The subject then elects to play the first game listed according to the training schedule prescribed for the subject.
In one embodiment, a training schedule is provided by a certified Speech and Language Professional (SLP), and the SLP oversees each training session according to the schedule.
An exemplary schedule requires a subject to cycle through five of the seven games for an hour and forty minutes, five days per week, for approximately six weeks. In addition, the schedule typically requires that a subject play Circus Sequence and Language Comprehension Builder every day, alternating the other games so that they are played approximately the same amount of time.
In an alternative embodiment, the game schedule is specified by an SLP at a remote server, and the daily parameters of the schedule are downloaded to the subject's computer, either daily or weekly. The schedule can be optimized over the course of the training program to first develop skills required for subsequent more advanced skills. It can also be used to help manage time in each game so that all of the games are completed at about the same time at the end of the training program. This embodiment allows a subject to obtain the benefits of the Fast ForWord program, and the oversight of a certified SLP, regardless of his/her geographic location.
One skilled in the art will appreciate that the training schedule could either be provided in a window on the subject's computer, or could actually control the game selection screen to prompt the user only for those games required on a particular day.
Once a subject selects a particular game, he/she is taken into that particular game's module. Alternatively, once the subject selects his/her name from the list, the particular games may be presented, in a predefined order, without requiring the subject to first select the game. For ease of illustration, each of the seven games will be discussed in the order represented in Figure 8.
Referring to Figure 9, a scene 900 is shown for the first game in the program, Old MacDonald's Flying Farm (OMDFF). OMDFF uses a psychophysical procedure called limited-hold reaction time. A subject is asked to start a trial, in this case by grabbing a flying animal, at which point the game begins presenting a distractor phoneme that is modified in the time domain only. More specifically, information bearing acoustic elements whose temporal location within a phoneme carry important cues for phoneme identification are modified by stretching the acoustic elements in time, say to 150% of their normal duration. The acoustic elements that are stretched include voice onset time (VOT) between consonant and vowel events, as well as fricative-vowel gaps.
The inter-stimulus interval (ISI) between presentations of the distractor phoneme is set initially to 500ms. The distractor phoneme is repeated a random number of times, usually between 3 and 8 times, before the target phoneme is presented. The target phoneme has normal temporal acoustic parameters. The subject is asked to continue to hold the animal until the target phoneme is presented. When the subject hears the target phoneme, the subject is to release the animal. If the subject accurately hears the target phoneme and releases the animal within a desired "hit" window, then his/her score increases. If the subject misses the target phoneme, the animal flies away and no points are given. As the subject improves, the temporal parameters of the distractor phonemes are reduced in time to that of normal speech, and the ISI is reduced systematically to 300ms.
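The trial structure just described might be sketched as follows. This is an illustrative reconstruction, not the program's actual code; the function and parameter names are assumptions.

```python
import random

def omdff_trial(distractor, target, stretch_pct=150, isi_ms=500, rng=random):
    # Build the stimulus schedule for one limited-hold trial: the distractor
    # repeats a random 3-8 times, then the target is presented with normal
    # (unstretched) temporal parameters. Each entry is
    # (phoneme, duration as a percent of normal, ISI in ms).
    repeats = rng.randint(3, 8)
    schedule = [(distractor, stretch_pct, isi_ms)] * repeats
    schedule.append((target, 100, isi_ms))  # 100% = normal duration
    return schedule
```

For example, `omdff_trial("/Si/", "/Sti/")` yields between four and nine stimuli, the last being the unmodified target.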
A number of scenes are provided in OMDFF, each correlated to a specific pair of sounds. The correlation of sound pairs to farm scenes is shown below:
Sound Pair       Scene
/Gi/ - /Ki/      Barn
/Chu/ - /Shu/    Mudpit
/Si/ - /Sti/     Garden
/Ge/ - /Ke/      House
/Do/ - /To/      Coop

So, when a subject grabs the flying animal, the game begins presenting a tone pattern such as: /Si/ ... /Si/ ... /Si/ ... /Si/ ... /Sti/. When the subject hears /Sti/, the subject is to release the animal.
The scene 900 provides a general farmyard background with three elements that persist across all the scenes. The elements are the score digits 906, the stop sign 908, and the tractor 910. The tractor 910 acts as a progress creature to graphically indicate to a subject their progress during a game. If the subject gets a correct response, the tractor 910 advances across the screen 900, from right to left.
The score digits 906 display the subject's current score. The stop sign 908 is common to all seven games, and provides a subject with a means for exiting the game, and then the program.
Also shown on the screen 900 are a flying farm animal 902, and a selection hand 904. In this scene, the flying farm animal 902 is a cow with a rocket pack. Other scenes provide different farm animals propelled through the air with different flying apparatus. Operation of the game OMDFF
will now be described with reference to Figure 10.
In Figure 10, a flow chart 1000 is provided that illustrates operation of the OMDFF game. The game begins at block 1002 and proceeds to block 1004.
At block 1004, the computer program selects a particular tone sequence to be played for a subject. For example, the program would select the tone pair /Si/ ... /Sti/, stretched 150%, with an ISI of 500ms. The tone pair that is selected, the stretching, and the ISI are all associated with a particular skill level. And, the skill level that is presented to a subject is adapted in real time, based on the subject's ability to recognize the target phoneme, as will be further described below. However, the initial phoneme pair, stretching and ISI are chosen to allow an LLI subject to understand the game, and to begin to distinguish phonemes common in speech. Upon selection of a particular phoneme sequence, and skill level, flow proceeds to block 1006.
At block 1006, the game presents a flying animal 902. As mentioned above, the animal 902 that is presented varies according to which of the phoneme pairs are selected. If the animal 902 is a flying cow, the phoneme pair that will be presented is /Gi/ ... /Ki/. The animal 902 continues to fly around the screen until the subject places the selection hand 904 over the animal 902, and holds down a selection button, such as a mouse button. After the animal 902 is presented, flow proceeds to decision block 1008.
At decision block 1008, a test is made as to whether the subject has selected the animal 902. If not, flow proceeds to block 1010 where the animal 902 continues to fly. The animal 902 will continue moving about the scene 900 until it is selected. Flow then proceeds to block 1012.
At block 1012, the program begins presenting the selected phoneme sequence.
More specifically, an audio formatted file is called by the program that is to be played by a computer, either through speakers connected to the computer, or through headphones worn by a subject. In one embodiment, the file is a QuickTime audio file, configured according to the parameters necessary for the skill level of the user, i.e., phoneme pair, stretching, and ISI. In addition, a starting point in the file is chosen such that the distractor phoneme is presented a random number of times, between 3 and 8 times, before the target phoneme is presented. After the phoneme sequence begins playing, flow proceeds to decision block 1014.
At decision block 1014, a determination is made as to whether the subject has released the animal 902. If the subject has not released the animal 902, a parallel test is made, shown as decision block 1016.
Decision block 1016 tests whether a "hit" window has passed. More specifically, the program contains a lockout window of 200ms that begins when the target phoneme is played. It is believed that if the subject releases the animal 902 within 200ms of the target phoneme beginning play, it is merely coincidental that he/she would have heard the target phoneme. This is because no subject's reaction time is quick enough to release the animal 902 so soon after hearing the target phoneme. The start of the "hit" window begins after the lockout window, i.e., 200ms after the target phoneme begins. The end of the hit window is calculated as the start of the hit window, plus the length of one phoneme letter. So, at decision block 1016, if the hit window has not passed, the computer continues to test whether the subject has released the animal 902. If the hit window has passed, and the subject has not released the animal 902, flow proceeds to block 1026.
At block 1026, a miss is recorded for that test. After recording the miss, flow proceeds back to block 1021.
At block 1021, the skill level for the selected phoneme sequence is decreased, as will be further described below. Flow then proceeds back to block 1006 where another flying animal is presented for the same phoneme sequence.
At decision block 1014, if it is determined that the subject has released the animal 902, instruction flow proceeds to decision block 1018.
At decision block 1018, a determination is made as to whether the hit window has begun. That is, did the subject release the animal 902 during or before the lockout period? If the hit window has not begun, instruction flow proceeds to block 1020.
Block 1020 records a false alarm and instruction flow proceeds to block 1021. It should be appreciated that a false alarm is recorded, rather than a miss, because it suggests that the subject detected a change in the phoneme sequence when a change has not yet occurred. If, at decision block 1018, the hit window has begun, flow proceeds to decision block 1022.
At decision block 1022, a determination is made as to whether the hit window has passed. If the hit window has passed, prior to the subject releasing the animal 902, then flow proceeds to block 1026 where a miss is recorded, as described above. However, if the hit window has not passed, flow proceeds to block 1024.
At block 1024, a hit is recorded for the subject. That is, the subject has correctly heard the target phoneme, and has released the animal 902 in an appropriate time frame. Flow then proceeds to decision block 1028.
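The release-classification logic of blocks 1014 through 1026 can be summarized in a small function. This is a sketch under the assumption that times are measured in milliseconds from the start of the phoneme sequence; the names are illustrative.

```python
LOCKOUT_MS = 200  # releases this soon after target onset are treated as coincidental

def classify_release(release_ms, target_onset_ms, phoneme_len_ms):
    # Hit window: opens one lockout period after the target begins, and
    # closes one phoneme length later. release_ms is None if the subject
    # never released the animal.
    hit_start = target_onset_ms + LOCKOUT_MS
    hit_end = hit_start + phoneme_len_ms
    if release_ms is not None and release_ms < hit_start:
        return "false alarm"   # released before the hit window began
    if release_ms is None or release_ms > hit_end:
        return "miss"          # hit window passed without a release
    return "hit"
```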
At decision block 1028, a determination is made as to whether the subject has heard the target phoneme, and released the animal 902 within the hit window, three times in a row. If not, then flow proceeds back to block 1006 where another animal 902 is presented. If the subject has responded correctly, three times in a row, flow proceeds to block 1030.
At block 1030, the skill level for the selected tone sequence is increased by one level. In one embodiment, 18 skill levels are provided for each phoneme sequence. As mentioned above, the skill levels begin with temporal modifications of the phonemes, and by separating the presented phonemes with an ISI of 500ms. As the subject's ability to distinguish between the distractor and target phonemes improves, the temporal modifications of the phonemes are reduced to that of normal speech, and the ISI is reduced to 300ms. One skilled in the art will appreciate that the degree of phoneme temporal manipulation, from 150% to 100%, the variation of ISI among the skill levels, and the number of skill levels provided, may vary depending on the LLI subject and the type of training that is required. In one embodiment, after a subject successfully passes a phoneme sequence with 150% time modification, and an ISI of 500ms, the next skill level presented holds the time modification at 150%, but reduces the ISI to 400ms. Flow then proceeds to decision block 1032.
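The adaptive rule of blocks 1021 and 1030, down one level after an error and up one level after three consecutive hits, might be expressed as below. This is an illustrative sketch; the patent does not give this bookkeeping in code.

```python
def update_level(level, recent, max_level=18):
    # recent: trial outcomes for the current phoneme sequence, newest last.
    # A miss or false alarm drops the level (block 1021); three hits in a
    # row raise it (block 1030). In practice the run of hits would be
    # reset after each level change.
    if recent and recent[-1] != "hit":
        return max(1, level - 1)
    if recent[-3:] == ["hit", "hit", "hit"]:
        return min(max_level, level + 1)
    return level
```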
At decision block 1032, a determination is made as to whether the maximum level has been reached for the selected phoneme sequence. That is, has the subject progressed through all the skill levels to the point that they are correctly recognizing a target phoneme with a duration of 100%, and with an ISI of 0ms? If not, then flow proceeds to block 1006 where the animal 902 is again presented to the subject, this time, at an increased skill level. However, if the subject has reached the maximum level for a particular phoneme sequence, flow proceeds to block 1004 where a new phoneme sequence is selected. If a subject has not yet played the new phoneme sequence that is selected, the skill level is set to the easiest level. However, if the subject has previously heard the new phoneme sequence, the level of play begins either at or below the last skill level obtained, typically a few skill levels below what was last obtained.
Selection of phoneme sequences and skill levels is performed by the program to ensure that a subject is exposed to each of the phoneme pairs, but spends the greater portion of his/her time with those pairs that are the most difficult to distinguish. In addition, the number of recorded hits/misses/false alarms and reaction times are recorded for each level, and for each phoneme pair, on a daily basis. The records are then uploaded to a remote server where they are either reviewed by a remote SLP, or are tabulated and provided to a local SLP. The SLP then has the option of controlling the selection of phoneme sequences and/or skill level, according to the particular needs of the subject, or of allowing automatic selection to occur in a round robin manner.
While not shown, the program also keeps track of the number of correct responses within a sliding window. This is visually provided to a subject by advancing the tractor 910, from the right to the left, for each correct response. After 10 correct responses, creative animations are played, and bonus points are awarded, to reward the subject and to help sustain the subject's interest in the game. Of course, the type of animation presented, and the number of correct responses required to obtain an animation, are variables that may be set by an SLP.
Now referring to Figure 11, a screen 1100 is shown of the second game in the Fast ForWord program, entitled Block Commander. The Block Commander game presents a subject with audio prompts, directing the subject to perform an action. An exemplary action might be "point to the green circle." The types of prompts are grouped according to difficulty, requiring a subject to perform increasingly sophisticated tasks, depending on their skill level. If the subject responds correctly, he/she is awarded a point. Otherwise, the cursor hand turns red and demonstrates how the command should have been performed. This feedback allows the subject to learn from the computer the more difficult manipulations that are required. In addition, the prompts are digitally processed by stretching the speech commands (in the time domain), and by emphasizing particular frequency envelopes in the speech that contain time modulated acoustic components.
The screen 1100 contains a number score 1102 and a stop sign 1104. The number score 1102 provides visual feedback to a subject regarding their progress in the game, and the stop sign 1104 provides a selection mechanism for ending the game. Also shown is a cat 1106. The cat 1106 provides animations for a subject during training. A grid 1120 is shown, in a 55 degree perspective, upon which are placed 3D tokens, further described below. In the center of the grid 1120 is an ear/hand button 1108.
When a subject places a hand selector 1110 on top of the ear/hand button 1108, and selects the icon (by pressing a mouse key), then a trial in the Block Commander game begins. This is shown in Figure 12, to which attention is now directed.

In Figure 12, a screen shot 1200 is shown that includes the stop sign, number score, and grid, as shown above. In addition, a row of different colored squares 1202, and a row of different colored circles 1204 are provided. Use of the squares 1202 and the circles 1204 will be described below with reference to Figure 13.
Also shown are a number of progress tokens 1206 at the bottom of the screen 1200. The progress tokens 1206 indicate the number of correct answers within a particular instance of the game. In one embodiment, after 5 tokens 1206 are shown, indicating 5 correct responses, a reward animation and bonus points are provided to the user.
Now referring to Figure 13, a flow chart 1300 is shown that illustrates operation of the Block Commander game. Execution begins at block 1302 and proceeds to block 1304.
At block 1304 the game selects the first playing level that is to be presented to a subject. To the right of block 1304 is a table 1330 that illustrates the 5 processing levels that are used in the Block Commander game.
The levels are distinct from each other in terms of the amount of stretching (in the time domain) that is used on speech, and the amount of emphasis that is applied to selected frequency envelopes within the speech. Flow then proceeds to block 1306.
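A table like 1330 might be represented as below. Only levels 1, 2 and 5 are given explicitly in this document (150% duration with 20dB emphasis, 125% with 20dB, and normal speech); the values shown for levels 3 and 4 are invented placeholders, not taken from the patent.

```python
# (duration stretch %, envelope emphasis dB) per Block Commander level.
# Levels 3 and 4 are assumed intermediate values, not from the patent.
PROCESSING_LEVELS = {
    1: (150, 20),
    2: (125, 20),
    3: (125, 10),  # hypothetical
    4: (100, 10),  # hypothetical
    5: (100, 0),   # normal speech
}

def stimulus_params(level):
    # Look up the speech-processing parameters for a given level.
    return PROCESSING_LEVELS[level]
```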
At block 1306, the game presents a program to a subject that trains the subject to play the game. The training portion consists of 3 rounds. The first round trains the subject to distinguish between object sizes, e.g., large and small. The second round trains the subject to distinguish between object shapes, e.g., square and circle. The third round trains the subject to distinguish between object colors, e.g., blue, red, yellow, green and white. More specifically, the prompts given to a subject during training are:
Round 1 (Size)
Touch the large circle
Touch the small circle
Touch the large square
Touch the small square

Round 2 (Shape)
Touch the square
Touch the circle

Round 3 (Color)
Touch the blue square
Touch the red square
Touch the yellow square
Touch the green square
Touch the white square

For a subject to pass any of the training rounds, and progress to the next training round, two correct hits are required for each command prompt, with no errors. If an error is made, the score is reset, and play for that round starts over. All of the prompts for the training rounds are at processing level 1, 150% duration and 20dB
emphasis. After a subject has completed the training program, he/she will not see it again. Upon completion of the training program, flow proceeds to decision block 1308.

At decision block 1308, a determination is made as to whether the training has been completed. If not, then flow proceeds back to block 1306 where training continues. If training has been completed, flow proceeds to block 1310.
At block 1310, a warm up exercise is presented to a subject. The warm up exercise is presented each time a user plays the game, at the speech processing level that was last completed.
The warm up round includes the following prompts:
Warm up
Touch the green circle
Touch the yellow square
Touch the blue square
Touch the white circle
Touch the red circle
Touch the blue circle
Touch the green square
Touch the yellow circle
Touch the red square
Touch the white square

The ordering of the prompts is random each time the warm up is played. After presentation of each of the prompts, flow proceeds to decision block 1312.
At decision block 1312, a determination is made as to whether the warm up round has been completed. If not, then flow proceeds back to block 1310 where the warm up continues. Otherwise, flow proceeds to block 1314.
At block 1314, an appropriate processing level is selected for a subject. The first time a subject plays the Block Commander game, processing level 1 is selected. However, after the subject has progressed beyond processing level 1, the level selected will be the level that the subject last played. Flow then proceeds to block 1316.
At block 1316, the first round of the game is presented to a subject. As mentioned above, in one embodiment of the Block Commander game, six rounds are provided. The rounds are as follows:
Round 1
Touch the green circle
Touch the yellow square
Touch the blue square
Touch the white circle
Touch the red circle
Touch the blue circle
Touch the green square
Touch the yellow circle
Touch the red square
Touch the yellow square

Round 2
Touch the small green circle
Touch the large red circle
Touch the large white circle
Touch the large red square
Touch the small yellow circle
Touch the large green circle
Touch the large green square
Touch the small white circle
Touch the small blue square
Touch the large green circle

Round 3
Touch the white circle and the blue square
Touch the blue square and the red circle
Touch the red square and the green circle
Touch the green square and the blue square
Touch the yellow circle and the red circle
Touch the red square and the green square
Touch the red square and the yellow circle
Touch the white square and the red circle
Touch the green circle and the green square
Touch the blue square and the yellow circle

Round 4
Touch the small green circle and the large yellow square
Touch the small red square and the small yellow circle
Touch the large green square and the large blue circle
Touch the large red square and the large blue square
Touch the small red square and the small green circle
Touch the small white circle and the small green circle
Touch the large red square and the large white square
Touch the large green circle and the large red circle
Touch the small blue square and the small white circle
Touch the small yellow square and the large blue square

Round 5
Put the blue circle on the red square
Put the green square behind the white circle
Touch the green circle with the blue square
Touch - with the green circle - the blue square
Touch the green circle and the blue square
Touch the green circle or the blue square
Put the white square away from the yellow square
Put the yellow square in front of the red square
Touch the squares, except the yellow one

Round 6
Put the white square beside the red circle
Put the blue circle between the yellow square and the white square
Except for the blue one, touch the circles
Touch the red circle - No! - the green square
Instead of the yellow square, touch the white circle
Together with the yellow circle, touch the green circle
After touching the yellow square, touch the blue circle
Put the red circle underneath the yellow square
Before touching the white circle, touch the blue square

Each of the prompts is presented to the user in random order, but successful completion of each of the prompts in a round is required before a round is considered complete. After a first prompt is provided to a subject, flow proceeds to decision block 1318.
At decision block 1318, a determination is made as to whether there have been 90% correct responses in a sliding group of 5 items. If not, then flow proceeds back to block 1316 where another prompt in a round is provided. If there have been 90% correct responses, as will be illustrated by 5 progress tokens at the bottom of the screen, then flow proceeds to block 1320.
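The sliding-window test of decision block 1318 might look like the following sketch (names illustrative). Note that with a window of 5, a 90% criterion effectively requires 5 correct out of the last 5.

```python
def reward_due(responses, window=5, threshold=0.9):
    # responses: booleans, newest last. Returns True when the fraction
    # correct over the last `window` responses meets the threshold.
    if len(responses) < window:
        return False
    recent = responses[-window:]
    return sum(recent) / window >= threshold
```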
At block 1320, the subject is shown a reward animation. In one embodiment, the animation consists of characters morphing out of the blocks on the board. Flow then proceeds to decision block 1322.
At decision block 1322, a determination is made as to whether the round is complete. A round is complete when a subject successfully responds to all of the prompts in the round. If the round is not complete, flow proceeds back to block 1316 where another prompt is provided to the subject.
If the round is complete, flow proceeds to decision block 1324.
At decision block 1324, a determination is made as to whether all six rounds within the game have been completed. If not, then flow proceeds to block 1326 where the round level is incremented. Flow then proceeds back to block 1316 where prompts for the new round are presented. If decision block 1324 determines that all rounds have been completed, flow proceeds back to block 1314 where an appropriate skill level is selected. In one embodiment, if a subject successfully completes all six rounds at skill level 1 (150% duration, 20dB emphasis), he/she will progress to skill level 2 (125% duration, 20dB emphasis).
The Block Commander program begins by providing a subject with a number of simple commands, stretched in time, with particular emphasis given to phoneme components that are difficult for an LLI subject to understand. As the subject correctly responds to the simple commands, the commands increase in difficulty. Once the subject masters the more difficult commands, the amount of stretching, and the amount of emphasis, is reduced, and the process is repeated. The rounds continue, over the course of days and weeks, until the subject is correctly responding to the difficult commands at skill level 5, which is normal speech.
One skilled in the art will appreciate that the commands cause the subject not only to understand the phonemes that are presented, but also to apply logical reasoning to the more difficult commands, and to recall the constructs of the commands. The requirement that the subject recall the command constructs is directed at improving the subject's memory, as well as at improving their ability to process acoustic events. It is believed that the game's repetitive nature, which trains the subject's neurological connections to process speech, is also helpful in improving the subject's memory, and his/her cognitive skills in understanding linguistic relationships.
Now referring to Figure 14, a screen shot 1400 is shown for the third game in the Fast ForWord program, entitled Circus Sequence. The Circus Sequence game trains a subject to distinguish between upward and downward frequency sweeps that are common in the stop consonant portion of phonemes, by varying the duration and frequency of the sweeps, and by varying the inter-stimulus interval (ISI) between presentation of the sweeps.
The screen 1400 contains a number score 1402, a stop sign 1404, and a progress element 1406, all within a circus ring environment. In addition, the screen 1400 contains a hand selector 1408, and an ear/hand button 1410. As in the Block Commander game, a user begins a test by selecting the ear/hand button 1410 with the hand selector 1408.
Referring to Figure 15, a screen shot 1500 is shown that illustrates two elements 1502, 1504 that are presented to a subject after the ear/hand button 1410 is selected. The left element 1502 pertains to an upward frequency sweep, and the right element 1504 pertains to a downward frequency sweep. In addition, a progress element 1506 is shown elevated above the circus ring floor, to indicate that a subject has correctly responded to a number of tests. Game play will now be illustrated with reference to Figure 16.
Figure 16 provides a flow chart 1600 that illustrates program flow through the training portion of the Circus Sequence Game. Training begins at block 1602 and proceeds to block 1604.
At block 1604, the program begins presenting a random sequence of frequency sweeps to a subject. All sweep sequences are of the form: up-up; up-down; down-up; or down-down. Thus, if the program presents the sweep sequence "up-up", a subject is to click on the left element 1502 twice.
If the program presents a sweep sequence "down-up", the subject is to click on the right element 1504, then on the left element 1502. So, once the program provides a sweep sequence to the subject, the subject selects the elements corresponding to the frequency modulated (FM) tone sequence. If the subject is correct, he/she is awarded points, the progress element 1506 advances upwards, and the ear/hand button 1410 is presented, allowing the subject to begin another test. During training, all upward sweeps are presented starting at 1 kHz and all downward sweeps ending at 1 kHz, with upward/downward sweeps at 16 octaves per second. The duration of the sweeps is 80ms, and the sweeps are separated by 1000ms. Research has shown that most LLI subjects are capable of distinguishing between frequency sweeps of this duration, and having an ISI of 1000ms. After each sweep sequence is presented, flow proceeds to decision block 1606.


At decision block 1606, a determination is made as to whether the subject has correctly responded to 80% of the trials over a sliding scale of the last ten trials. If not, then flow proceeds back to block 1604 where the sequences continue to be presented. If the subject has correctly responded 80% of the time, flow proceeds to block 1608.
At block 1608, random sequences are again presented, at 1 kHz, having a duration of 80ms and an ISI of 1000ms. Flow then proceeds to decision block 1610.
At decision block 1610, a determination is made as to whether the subject has correctly responded to 90% of the trials over a sliding scale of the last ten trials. If not, then flow proceeds to decision block 1612. If the subject has correctly responded to 90% of the trials over a sliding scale of the last ten trials, flow proceeds to block 1614.
At decision block 1612, a determination is made as to whether a subject has correctly responded to less than 70% of the trials, over a sliding scale of the last 20 trials. If not, indicating that he/she is responding correctly between 70-90% of the time, then flow proceeds back to block 1608 where the sweep sequences continue to be presented. If a determination is made that the subject is correctly responding less than 70% of the time over the last 20 trials, then flow proceeds back to block 1604, where the training begins again.
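The three accuracy criteria of blocks 1606 through 1612 (80% over the last ten trials to advance, 90% over the last ten to finish training, below 70% over the last twenty to restart) can be sketched as a small state function; the stage names are illustrative labels for the flow-chart blocks, not part of the program.

```python
def training_transition(stage, history):
    # history: booleans for completed trials, newest last.
    def rate(n):
        recent = history[-n:]
        return sum(recent) / len(recent) if recent else 0.0

    if stage == "block1604":
        # Advance once 80% of the last ten trials are correct.
        return "block1608" if len(history) >= 10 and rate(10) >= 0.8 else "block1604"
    if stage == "block1608":
        if len(history) >= 10 and rate(10) >= 0.9:
            return "block1614"       # the 3-up, 1-down rule begins
        if len(history) >= 20 and rate(20) < 0.7:
            return "block1604"       # training begins again
        return "block1608"
    return stage
```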
At block 1614, a 3-up, 1-down rule begins. This rule allows a subject to advance in difficulty level every time 3 correct responses are provided, while reducing the level of difficulty any time an incorrect response is given. Research has shown that a 3-up, 1-down rule allows a subject to obtain a correct response rate of approximately 80% near threshold, which is desired to motivate and encourage the subject to continue. A reduced accuracy rate discourages a subject, a situation that is not desired, especially if the subject is an LLI child. Once the 3-up, 1-down rule is started, flow proceeds to decision block 1616.
At decision block 1616, a determination is made as to whether a subject has responded correctly to the last 3 tests. If so, then flow proceeds to block 1620. If not, then flow proceeds to decision block 1618.
At decision block 1618, a determination is made as to whether a subject has incorrectly responded to the last test. If not, then flow proceeds back to decision block 1616 where another test is provided. However, if the subject has incorrectly responded to the last test, the difficulty level is reduced one level, and flow proceeds back to decision block 1616 where another test is presented. During the training level, all tests are performed at 80ms duration, with 1000ms ISI, which is the easiest skill level. Therefore, if the subject incorrectly responds at that level, no change in difficulty is made.
At block 1620, the skill level is increased. During training, the sweep sequences are presented at 1kHz, with 80ms duration, but the ISI between the sweeps is reduced each time the level is incremented. In one embodiment, the ISI levels start at 1000ms, and proceed through 900ms, 800ms, 700ms, 600ms and 500ms.
Flow then proceeds to decision block 1624.
At decision block 1624, a determination is made as to whether the ISI is at 500ms. If not, then flow proceeds back to decision block 1616 where sweep sequences continue to be presented. If the ISI is 500ms, the training session ends and the subject is allowed to enter the real game, at block 1626.

SUBSTITUTE SHEET (RULE 26)

Referring now to Figure 17, a flow chart 1700 is provided that illustrates operation of the Circus Sequence game after the training session has been completed. The game begins at block 1702 and proceeds to block 1704.
At block 1704, an appropriate skill level is selected. The skill levels used by Circus Sequence are shown in table 1730. For each of three frequencies, 500Hz, 1kHz, and 2kHz, a number of skill levels are provided. The skill levels begin by presenting frequency sweeps having a duration of 80ms, and an ISI between the sweeps of 500ms. As a subject advances, the ISI is reduced, either to 0ms, or in one embodiment, to 125ms. It should be appreciated that the ISI increments used should be selected to slowly train a subject's ability to distinguish between similar phonemes, such as /ba/ and /da/, while not frustrating the subject by training beyond the levels required to distinguish between such phonemes.
When a subject first plays Circus Sequence, after passing training, he/she is provided with frequency sweeps beginning at 1kHz, having 80ms duration and an ISI of 500ms. On subsequent days, the frequency that is selected is random, and can be either 500Hz, 1kHz or 2kHz. Once the appropriate skill level has been selected,
flow proceeds to block 1706.
At block 1706, a tone sequence is presented, according to the selected skill level. Flow then proceeds to decision block 1708.
At decision block 1708, a determination is made as to whether the subject has correctly responded to the last 3 trials. If not, then flow proceeds to decision block 1710. If the subject has correctly responded to the last 3 trials, flow proceeds to block 1712.
At decision block 1710, a determination is made as to whether the subject has incorrectly responded to the last trial. If not, then flow proceeds back to block 1706 where another tone sequence is presented. If the subject incorrectly responded to the last trial, flow proceeds to block 1714.
At block 1714, the skill level is decremented. If the skill level has an ISI of 500ms, no decrease is made.
However, if the skill level has an ISI that is less than 500ms, the difficulty is reduced 1 level. For example, if the subject incorrectly responds to a trial having an ISI of 180ms, the difficulty level will be reduced, so that the next tone sequence will have an ISI of 185ms. Flow then proceeds back to block 1706 where another tone sequence is presented.
At block 1712, if the user has correctly responded to the last 3 trials, the skill level is incremented. For example, if a subject is at a skill level with a sweep duration of 80ms and an ISI of 250ms, the skill level will increase such that the ISI for the next tone sequence will be 200ms. Flow then proceeds to decision block 1716.
At decision block 1716, a determination is made as to whether the ISI is at 150ms. If not, then flow proceeds to decision block 1720. If the ISI is at 150ms, flow proceeds to block 1718.
At block 1718, the next lower duration is enabled. This allows the program to simultaneously test a subject with multiple sweep durations, once the subject is successfully responding at an ISI level of 150ms. For example, if a subject is correctly responding to tone sequences of duration 80ms, with an ISI of 150ms, then testing continues at 80ms. In addition, testing is begun with sweep sequences of duration 60ms, at an ISI of 500ms. Flow then proceeds back to block 1706 where another tone sequence is presented. This allows the program to present tone sequences of different duration and different ISI, while tracking progress for each duration/ISI combination.
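The per-duration tracking described above can be sketched as follows. The text names the 80ms and 60ms durations; the 40ms entry and the data shape are assumptions for illustration.

```python
DURATIONS = [80, 60, 40]  # ms; 80ms and 60ms are named in the text, 40ms is assumed

def update_durations(active, duration, new_isi):
    """Track ISI progress independently for each sweep duration.

    `active` maps a duration to its current ISI. When a duration's ISI
    reaches 150ms, the next shorter duration is enabled at a 500ms ISI,
    so the program can interleave trials at multiple durations.
    """
    active[duration] = new_isi
    if new_isi <= 150:
        idx = DURATIONS.index(duration)
        if idx + 1 < len(DURATIONS) and DURATIONS[idx + 1] not in active:
            active[DURATIONS[idx + 1]] = 500
    return active
```

Each entry in `active` then advances under its own 3-up, 1-down staircase until its ISI reaches 0ms and the duration is retired.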
At decision block 1720, a determination is made as to whether the subject has reached a training threshold.
In one embodiment, a training threshold is reached when the subject has had eight skill level reversals within six skill levels of each other. If such a threshold is reached, flow proceeds to block 1721. Otherwise, flow proceeds to decision block 1722.
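One way to implement the reversal-based threshold is sketched below. The patent does not specify the exact bookkeeping, so the definition of a "reversal point" here is one interpretation.

```python
def threshold_reached(levels, needed=8, span=6):
    """Approximate the stopping rule: count direction reversals in a
    subject's skill-level history and report True when the last `needed`
    reversal points all lie within `span` levels of each other.
    """
    reversals = []
    direction = 0
    for prev, cur in zip(levels, levels[1:]):
        step = (cur > prev) - (cur < prev)   # +1 up, -1 down, 0 unchanged
        if step and direction and step != direction:
            reversals.append(prev)           # record the turning point
        if step:
            direction = step
    recent = reversals[-needed:]
    return len(recent) == needed and max(recent) - min(recent) < span
```

A subject oscillating tightly around one level trips the threshold quickly, while steady progress in one direction never does.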
At block 1721, the program moves the subject to the next frequency category to be tested. It is believed that once a threshold has been met on a particular day, the subject should not continue being tested at the same frequency. Thus, the program allows a subject to progress, either to an ISI of 0ms (or some other minimal ISI) or to a threshold at one frequency, and then begin testing at an alternative frequency. Flow then proceeds back to block 1706.
At decision block 1722, a determination is made as to whether the ISI for a particular tone duration is 0ms.
If not, then flow proceeds back to block 1706 where another sweep sequence is presented. However, if a subject has reached a skill level of 0ms ISI for a particular duration, flow proceeds to block 1724.
At block 1724, the program deletes the duration associated with the 0ms ISI from the trial. This is because testing at that level is no longer required by the subject due to his/her proficiency. However, as mentioned above, an alternative embodiment may select an ISI of greater than 0ms as the point where the duration is deleted from the game. Flow then proceeds back to block 1706 where more tone sequences are presented.
While not shown, in one embodiment, a threshold level is provided that causes the game to begin testing a subject at an alternate frequency. For example, if the subject is testing at 500Hz, and a threshold is reached, the program will begin testing the subject at 2kHz. The threshold is reached when a subject has 8 skill level reversals within 6 levels of each other. When this occurs, the program ceases testing at the frequency for which the threshold was reached, and begins testing at an alternative frequency.
Also, when a subject begins each day of testing, a frequency different than that tested the previous day is begun. Moreover, a skill level that is 5 less than that completed the previous day is chosen, presuming the subject completed at least 20 trials for that frequency.
As mentioned above, each correct response causes the progress element 1506 to advance upward. After ten correct responses, a reward animation is provided to entertain the subject. When the animation ends, the subject is prompted with the ear/hand button 1410 to begin another trial.
Now referring to Figure 18, a screen shot 1800 of the fourth game in Fast ForWord, Phonic Match, is provided. The screen 1800 includes a set of pictures 1802, a progress creature 1804, a stop sign 1806, and a number score 1808. The progress creature 1804, stop sign 1806 and number score 1808 function similarly to those described in previous games.
The set of pictures 1802 are arranged into a 2x2 grid. When a subject selects any of the pictures, a word or phoneme is played. On any grid, there are two pictures that play the same word. Thus, for a 2x2 grid, there are two words that will be presented. The test for the subject is to distinguish between similar words, to recall which picture is associated with which word, and to sequentially select two pictures that present the same word.


Similar words are presented together, with the words processed according to the processing levels shown in table 1902 of Figure 19.
Initially, subjects are presented words at processing level 1, with a duration of 150%, and having 20dB emphasis of selected frequency envelopes within the words. In addition, different skill levels, as shown in table 1904, are provided that increase the grid size for a particular trial, and set the maximum number of clicks, or selections, that a subject can attempt before losing the trial. Operation of the game is illustrated in Figure 20.
However, before providing a detailed description of game operation, the words used in the game are shown.
Word Group 1: big, bit, dig, dip, kick, kid, kit, pick, pig, pit, tick, tip
Word Group 2: buck, bud, but, cup, cut, duck, dug, pub, pup, tub, tuck, tug
Word Group 3: back, bag, bat, cab, cap, cat, gap, pack, pat, tack, tag, tap
Word Group 4: ba, cha, da, ga, ka, la, pa, ra, sa, sha, ta, za
Referring now to Figure 20, the Phonic Match game begins at block 2002, and proceeds to block 2004.
At block 2004, a 2x2 grid is presented. The words associated with the 2x2 grid are selected from one of the four Word Groups shown above. The selection of the Word Group is random, except that tracking of previously played Word Groups is done to ensure that all Word Groups are equally represented, and that a subject is not provided the same Word Group as played on an immediately preceding day. The words within a Word Group are typically selected according to their acoustic similarity.
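The Word Group selection described above can be sketched as follows; the function name and the play-count data shape are illustrative, not from the patent.

```python
import random

def pick_word_group(play_counts, yesterday=None):
    """Choose a Word Group at random among the least-played eligible groups,
    never repeating the group played on the immediately preceding day.

    `play_counts` maps a group number to the number of times it has been
    played; picking among the least-played groups keeps all four groups
    equally represented over time.
    """
    eligible = [g for g in play_counts if g != yesterday]
    fewest = min(play_counts[g] for g in eligible)
    return random.choice([g for g in eligible if play_counts[g] == fewest])
```

For example, if group 2 has been played less often than the others and was not played yesterday, it is chosen deterministically; ties are broken at random.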
The subject is required to sequentially select two pictures that have the same word associated with them.
When a subject sequentially selects two pictures associated with the same word, the pictures are removed from the grid being played. After a subject completes a 2x2 grid, whether correctly or incorrectly, flow proceeds to decision block 2006.
At decision block 2006, a determination is made as to whether the subject has successfully passed three 2x2 grids. Referring to table 1904 of Figure 19, ten skill levels are shown.
When a 2x2 grid is first presented, the skill level entered is level 8. Skill level 8 defines a 2x2 grid, with a maximum number of allowed clicks of 8. If a subject selects pictures on a 2x2 grid more than 8 times, the grid is not considered passed. If the subject has not yet passed three 2x2 grids, flow proceeds back to block 2004 where another 2x2 grid is presented with words from the same Word Group. If the subject has successfully passed three 2x2 grids, thus progressing from level 8 through level 10, flow proceeds to block 2008.
At block 2008, a new grid is presented for a particular Word Group, or stimulus set. Initially, a 3x3 grid is provided, at skill level 2. The maximum number of clicks allowed for a subject to pass a 3x3 grid is 20. Within a 3x3 grid, one of the pictures is a wildcard, since there are an odd number of pictures. Selection of the wildcard simply removes the picture from the grid, and does not count against the subject as a selection, or click. After a 3x3 grid is presented to a subject, flow proceeds to decision block 2010.
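The click budget and wildcard behavior can be sketched with a small simulation. The function name and data shapes are hypothetical; only the rules (wildcard clicks are free, matched pairs leave the grid, the click budget decides pass/fail) come from the text.

```python
def play_grid(selections, word_of, max_clicks, wildcard=None):
    """Simulate the click budget on a Phonic Match grid.

    `selections` is the ordered list of picture ids clicked; `word_of` maps
    each picture id to the word it plays. Clicking the wildcard removes it
    without counting as a click. Returns True if every pair is matched
    within `max_clicks`.
    """
    clicks = 0
    remaining = dict(word_of)
    held = None                      # first picture of a tentative pair
    for pic in selections:
        if pic == wildcard:
            continue                 # the wildcard never costs a click
        clicks += 1
        if clicks > max_clicks:
            return False
        if held is None:
            held = pic
        else:
            if (held != pic and held in remaining and pic in remaining
                    and remaining[held] == remaining[pic]):
                remaining.pop(held)  # matched pair leaves the grid
                remaining.pop(pic)
            held = None
    return not remaining
```

A perfect 2x2 round thus uses 4 clicks against a budget of 8, while repeated mismatches exhaust the budget and fail the grid.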
At decision block 2010, a determination is made as to whether the subject passed the level. That is, did the subject properly distinguish between word pairs, and sequentially select picture pairs associated with words in 20 or less clicks. If so, then flow proceeds to block 2012. If not, then flow proceeds to block 2014.


At block 2012, the skill level is incremented. For example, if a subject was at level 2, he/she will increment to level 3. Note: levels 2-3 present a 3x3 grid with a maximum number of clicks of 20, while levels 4-7 present a 4x4 grid with a maximum number of clicks of 60. Once the skill level is incremented, flow proceeds to block 2020.
At block 2020, a grid according to the new skill level is presented. The grid is associated with the same Word Group that was previously used, but possibly with different words from the group. Flow then proceeds to decision block 2022.
At decision block 2022, a determination is made as to whether the subject has passed the level. That is, did the subject correctly associate the word pairs in less than or equal to the number of allowed clicks. If not, flow proceeds to block 2014. If the subject passed the level, flow proceeds to decision block 2024.
At decision block 2024, a determination is made as to whether the subject has reached skill level 7. Level 7 is termed the "decision" level. If the skill level that has just been passed is not level 7, then flow proceeds back to block 2012 where the skill level is incremented. However, if the skill level passed is level 7, flow proceeds to decision block 2026.
At decision block 2026, a determination is made as to whether all four stimulus sets, or Word Groups have been passed. If not, then flow proceeds to block 2018. However, if a subject has correctly passed skill level 7, for all four Word Groups, flow proceeds to block 2028.
At block 2028, the next processing level is selected. Referring to table 1902 of Figure 19, a subject begins at processing level 1 (duration 150%, emphasis 20dB). Once all four Word Groups have been passed at skill level 7, the amount of audio processing applied to the words is reduced. First, the duration of the words is reduced, from 150%, to 125%, to 100%, and then the amount of emphasis applied to selected frequency components is reduced, from 20dB, to 10dB, to 0dB. Once a subject has reached processing level 5, he/she is presented with normal speech. After the next processing level is selected, flow proceeds to decision block 2030.
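The five processing levels implied by the prose above (duration relaxed first, then emphasis) can be written out explicitly. This is a reconstruction of table 1902 from the running text; the exact table is not reproduced here, so the level-to-parameter mapping is an assumption.

```python
# (duration %, emphasis dB) for each processing level, per the prose:
# duration drops 150 -> 125 -> 100 first, then emphasis drops 20 -> 10 -> 0.
PROCESSING_LEVELS = {
    1: (150, 20),
    2: (125, 20),
    3: (100, 20),
    4: (100, 10),
    5: (100, 0),   # level 5: normal speech
}

def next_processing_level(level):
    """Advance one processing level, capping at level 5 (normal speech)."""
    return min(level + 1, 5)
```

Under this mapping a subject hears progressively less-stretched, then less-emphasized speech, reaching unprocessed speech at level 5.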
At decision block 2030, a determination is made as to whether all processing levels have been completed.
That is, has the subject reached processing level 5. If not, flow proceeds back to block 2004 where the game begins anew, with a 2x2 grid, but at the new processing level. However, if the subject has reached processing level 5, flow proceeds to block 2032.
At block 2032, a 5x5 grid is provided, with a maximum number of allowable clicks of 90. From this point forward, the game continues playing indefinitely, but the decision round, level 7, switches from a 4x4 grid to a 5x5 grid.
Referring back to decision block 2022, if a subject does not pass a particular level, flow proceeds to block 2014.
At block 2014, the skill level is decremented. Flow then proceeds to decision block 2016.
At decision block 2016, a determination is made as to whether the new skill level is less than level 1.
Level 1 is considered a "slip" level, indicating that if a subject fails at this level, a new Word Group should be provided. If the skill level is not less than 1, flow proceeds back to block 2020 where a new grid is presented according to the present level. If the new level is less than 1, that is, if the subject failed to pass a grid at skill level 1, flow proceeds to block 2018.
At block 2018, the program discontinues presenting words from the present Word Group, and changes the Word Group used for the grids. Flow then proceeds back to block 2008 where a 3x3 grid is presented, at skill level 2, using words from the new Word Group.
The flow chart 2000 demonstrates that a subject is required to proceed from level 2 through level 7 for each of the four Word Groups, at a particular processing level, before he/she is allowed to advance to the next processing level. The progress creature descends with each click. If the creature reaches the bottom, then the grid is not passed. If all picture pairs are matched prior to the creature reaching the bottom, extra points are awarded, a reward animation is presented and the grid is considered passed.
When a subject has correctly selected a predetermined number of picture pairs, the progress animal 1804 reaches the top, and the subject is rewarded by an animation.
Referring now to Figure 21, a screen shot 2100 is shown illustrating the fifth game in the Fast ForWord program, entitled Phonic Words. Phonic Words presents a subject with a sentence prompt that requires the subject to distinguish between two similar words, and to accurately select one of two pictures 2108, 2110, using a selection hand 2112. The table below provides a list of the word pairs used.
The first word in the pair is always the correct answer, but its representational image could appear on the left or right of the screen 2100.
base-face, face-base, vase-base, base-vase, face-vase, vase-face, bee-me, me-bee, knee-bee, bee-knee, knee-me, me-knee, breathe-breeze, breeze-breathe, day-they, they-day, lawn-yawn, yawn-lawn, ache-lake, lake-ache, ache-rake, rake-ache, ache-wake, wake-ache, lake-rake, rake-lake, lake-wake, wake-lake, rake-wake, wake-rake, sink-think, think-sink, chip-dip, dip-chip, sip-zip, zip-sip, chip-sip, sip-chip, chip-zip, zip-chip, dip-sip, sip-dip, dip-zip, zip-dip, pack-shack, shack-pack, tack-shack, shack-tack, pack-tack, tack-pack, tack-tag, tag-tack, rung-young, young-rung, rung-run, run-rung, young-run, run-young, pat-path, path-pat, bear-bell, bell-bear, thumb-tongue, tongue-thumb, comb-cone, cone-comb, mouse-mouth, mouth-mouse, cash-catch, catch-cash, fan-fang, fang-fan, sauce-saws, saws-sauce, bass-bath, bath-bass, cheese-chief, chief-cheese, foam-phone, phone-foam, fuzz-fudge, fudge-fuzz, safe-shave, shave-safe, long-lawn, lawn-long, piece-peas, peas-piece, piece-peach, peach-piece, peas-peach, peach-peas, wash-watch, watch-wash.
As before, the screen 2100 contains an ear/hand button 2102 for beginning a trial, a stop sign 2104 for ending the game, and a number score 2106. Within the number score 2106 are five acorns, indicating the processing level currently being tested. Also shown are progress creatures 2114 indicating a number of correct responses. As a subject correctly responds to the game, a new progress creature 2114 is added. When the number of progress creatures 2114 reaches ten, a reward animation is provided and bonus points are awarded.
Referring to Figure 22, a screen shot 2200 is shown where the word pair peach-peas is being tested. After a subject listens to a prompt containing the target word, he/she selects one of the two pictures. The subject, whether correct or incorrect, will then be shown the correct selection, in this case peach, by having the mask removed from the picture frame 2202.

Referring now to Figure 23, operation of the Phonic Words game is illustrated by flowchart 2300. Please note that five processing levels, similar to those used above in Phonic Match and Block Commander, are shown in table 2340. The game begins at block 2302 and proceeds to training block 2304.
At training block 2304, the subject is prompted to "press the ear button". The prompting is processed at level 1 (duration 150%, emphasis 20dB). Flow then proceeds to decision block 2306.
At decision block 2306, a determination is made as to whether the ear/hand button 2102 has been pressed.
If not, then flow proceeds back to block 2304 where the prompting is repeated. If the ear/hand button 2102 has been pressed, flow proceeds to block 2308.
At block 2308, praise is played for the subject. Flow then proceeds to block 2310.
At block 2310, a single image appears in one of the two frames 2108, 2110, and a sound file pertaining to the image is played for the subject. Flow then proceeds to decision block 2312.
At decision block 2312, a determination is made as to whether the subject has selected the appropriate image. The image continues to be displayed until the subject selects the image. Flow then proceeds to decision block 2314.
At decision block 2314, a determination is made as to whether the subject has correctly selected the single image, three times. If not, then flow proceeds back to block 2310 where another image is presented, with its associated word. If the subject correctly selects an image/word combination three times, flow proceeds to block 2316.
At block 2316, a pair of images are presented, along with a command prompt containing a word associated with one of the images. The other image presented is termed the distractor image. The user must click on the correct image 4 out of 5 times in a sliding scale to start the game. After the double image is presented, flow proceeds to decision block 2318.
At decision block 2318, a determination is made as to whether the subject has correctly selected an image, from the image pair, in 4 out of 5 cases, on a sliding scale. If not, then flow proceeds back to block 2316 where another image pair is presented. Otherwise, flow proceeds to block 2320 where the subject enters the game.
Flow then proceeds to block 2322.
At block 2322, a subject is presented a sequence of image pairs, with associated words selected from a particular processing set. The processing sets are chosen by grouping words having similar phoneme characteristics. Once all of the words have been presented within a processing set, flow proceeds to decision block 2324.
At decision block 2324, a determination is made as to whether the subject has correctly understood a word, and properly selected its associated picture from the picture pair, with 90% or greater accuracy. If not, flow proceeds back to block 2322 where random selection of image/word pairs continues until a 90% success rate is achieved. Once that rate is achieved, flow proceeds to block 2326.
At block 2326, a new processing set is selected. Flow then proceeds to decision block 2328.
At decision block 2328, a determination is made as to whether all of the processing sets have been completed. If not, then flow proceeds back to block 2322 where random selections of image/word pairs are presented from the current processing set. However, if all of the processing sets have been completed, flow proceeds to block 2330.

WO 99/31640 PCT/US98/26528
At block 2330, the processing level is incremented. Initially, the processing level is level 1. After a subject has completed all of the processing sets, with a 90% or greater accuracy for each of the sets, the processing level is increased to level 2. As described above, the duration of the words is decreased first, from 150%, to 125%, to 100%, and then the emphasis of selected frequency envelopes is reduced, from 20dB, to 10dB, to 0dB, until normal speech (level 5) is obtained. After the processing level is incremented, flow proceeds to decision block 2332.
At decision block 2332, a determination is made as to whether a subject has completed all of the sets at processing level 5. If not, then flow proceeds back to block 2322 where random selections of image/word pairs within a set are presented at the new processing level. However, if the subject has completed all of the processing sets at level 5, flow proceeds to block 2334.
At block 2334, Phonic Words continues to drill the subject, randomly selecting image/word pairs within a processing set, at level 5.
Now referring to Figure 24, a screen shot 2400 is provided for the sixth game in the Fast ForWord program, entitled Phoneme Identification. Phoneme Identification processes a number of phoneme pairs by selectively manipulating parameters such as consonant duration, consonant emphasis, and inter-stimulus interval. More specifically, five phoneme pairs are tested, each pair containing a target sound and a distractor.
These include: 1) aba-ada; 2) ba-da; 3) be-de; 4) bi-di; and 5) va-fa.
For each phoneme pair, 26 different skill levels are provided, each level differing from the others in the degree of processing applied (duration and emphasis), and in the separation (ISI) of the distractor and target phonemes. Skill level 1 processes the phoneme pair by stretching the consonant portion 150% while leaving the vowel portion untouched, emphasizing selected frequency envelopes in the consonant portion 20dB, and separating the distractor and target phonemes by 500ms, for example. Skill level 26 provides a phoneme pair without stretching or emphasis, and with an ISI of 0ms. Skill levels 2-25 progress towards normal speech by applying less and less consonant processing, with less and less separation between the distractor and target phonemes.
The screen 2400 contains an ear/hand button 2402 to allow a subject to begin a trial, a number score 2404 for tracking correct responses, a stop sign 2406 for exiting the game, a hand selector 2408. and progress elements 2410 for graphically illustrating progress to a subject. When the game is initially selected, five different animals are shown on the screen, each pertaining to a phoneme pair to be tested. A subject may select any one of the five animals to begin the game. After a subject has played the game with one of the five animals, the choice is reduced to four animals, and so on.
Referring to Figure 25, a screen shot 2500 is shown with two polar bears 2502, 2504. In one embodiment, the polar bears 2502, 2504 are associated with the phoneme pair ba-da.
There are five background scenes, each associated with an animal/phoneme pair, each having their own animations, etc.
When a subject presses the ear/hand button 2402, the game plays a target phoneme, either ba or da. The phoneme pair is then presented by the polar bears 2502, 2504, with one bear speaking the distractor and the other bear speaking the target sound. A
subject is required to distinguish between the distractor and target phonemes, and to select, with the hand selector 2508, the polar bear that spoke the target phoneme. Details of how the game Phoneme Identification is played will now be provided with reference to Figures 26 and 27.
Referring to Figure 26, a flow chart 2600 is shown that illustrates the training module of the Phoneme Identification game. Training begins at block 2602 and proceeds to block 2604.
At block 2604, the game presents the screen shot 2400, and prompts a subject to "press the ear button".
Flow then proceeds to decision block 2606.
At decision block 2606, a determination is made as to whether the subject has pressed the ear/hand button 2402. If not, then flow proceeds back to block 2604 where the prompt is repeated, after a predetermined interval. If the subject has pressed the ear/hand button 2402, flow proceeds to block 2608.
At block 2608, the ear/hand button 2402 is presented, but this time without an audio prompt. Flow then proceeds to decision block 2610.
At decision block 2610, a determination is made as to whether the subject has pressed the ear/hand button 2402. If not, then flow proceeds back to block 2608. The subject remains in this loop until the ear/hand button 2402 is pressed. Once the ear/hand button 2402 is pressed, flow proceeds to block 2612.
At block 2612, a target phoneme, pertaining to a selected animal pair, is played for a subject. The target phoneme is processed at level 1, 150% duration, with 20dB emphasis, as shown by the table 2640. Flow then proceeds to block 2614.
At block 2614, a single animal is presented that speaks the target phoneme.
Flow then proceeds to decision block 2616.
At decision block 2616, a determination is made as to whether the animal that spoke the target phoneme has been selected. If not, flow proceeds back to block 2614 where the animal again speaks the target phoneme, after a predetermined interval. However, if the subject has selected the animal, flow proceeds to decision block 2618.
At decision block 2618, a determination is made as to whether the subject has correctly pressed the animal in ten trials. If not, then flow proceeds back to block 2612 where another trial is begun. However, once the subject has correctly responded in ten trials, flow proceeds to block 2620.
At block 2620, a target phoneme is again presented, at level 1 processing.
Flow then proceeds to block 2622.
At block 2622, two animals are now presented, one speaking the target phoneme, the other speaking the distractor phoneme. The order of speaking the target and distractor phonemes is random, with the animal on the left speaking first, and the animal on the right speaking last. However, in this training level, the animal that speaks the target phoneme is visually highlighted for the subject. Both the target and distractor phonemes are processed at level 1, and are separated in time by 500ms. Flow then proceeds to decision block 2624.
At decision block 2624, a determination is made as to whether the subject has correctly selected the animal speaking the target phoneme in 8 out of 10 trials, on a sliding scale. If not, then flow proceeds back to block 2620 where another trial is begun. If the subject has correctly responded in 8 out of 10 trials, flow proceeds to block 2626.
At block 2626, a target phoneme is presented to a subject, processed at level 1. Flow then proceeds to block 2628.
At block 2628, two animals are shown presenting a target phoneme and a distractor phoneme, both processed at level 1, with an ISI of 500ms. The order of target/distractor phonemes is random. For this trial, however, the animal speaking the target phoneme is not visually highlighted for the subject. Flow then proceeds to decision block 2630.
At decision block 2630, a determination is made as to whether the subject has correctly responded to 8 out of 10 trials, on a sliding scale. If so, then the subject has successfully completed the training and flow proceeds to block 2634, allowing the subject to advance to the game. However, if the subject has not been successful in 8 out of 10 trials, then flow proceeds to decision block 2632.
At decision block 2632, a determination is made as to whether the subject has responded correctly less than 70% of the time in at least 10 trials. If not, then flow proceeds back to block 2626 where another trial is presented. If the subject has less than a 70% success rate, over at least 10 trials, then flow proceeds back to block 2614 where trials begin again, but where visual highlighting of the animal speaking the target phoneme is provided for the subject.
Referring now to Figure 27, a flow chart 2700 is provided that illustrates play of the Phoneme Identification game. Play begins at block 2702 and proceeds to decision block 2704.
At decision block 2704, a determination is made as to whether the ear/hand button 2402 has been pressed.
If not, then flow proceeds back to decision block 2704 until the subject chooses to hear the target phoneme. If the ear/hand button 2402 has been pressed, flow proceeds to block 2706.
At block 2706 a target phoneme is presented at an appropriate processing level. If this is the first time a subject has played the game, then the processing level for the phonemes is level 1, and the ISI between the target and distractor phonemes is 500ms. Otherwise, the skill level pertains to the historical success of the subject with the particular phoneme pair, as will be further described below. Flow then proceeds to block 2708.
At block 2708, two animals are shown, corresponding to the phoneme pair being tested, speaking the processed target and distractor phonemes, in random order. Flow then proceeds to decision block 2710.
At decision block 2710, a determination is made as to whether the subject has correctly selected the animal speaking the target phoneme. If not, then flow proceeds to block 2720. If the subject has correctly responded to the trial, flow proceeds to decision block 2712.
At block 2720, the skill level for play is decremented. For example, if the processing level is at level 1, having consonant duration of 150%, and emphasis of 20dB, but the ISI between the target and distractor phonemes is at 100ms, the game will drop back to a skill level where the ISI is at 110ms. However, if the skill level of play is already at level 1, then no change in processing is made.
At decision block 2712, a determination is made as to whether the subject has correctly responded in the last 3 consecutive trials. If not, then flow proceeds back to decision block 2704, awaiting another trial to begin.
However, if the subject has correctly responded to the last 3 trials, flow proceeds to block 2714. It should be appreciated that the procedure illustrated in blocks 2710-2714 is the 3-up, 1-down rule previously described in the Circus Sequence game above.
At block 2714, the skill level of the game is incremented. For example, if a subject has correctly responded to 3 consecutive trials, and is at a processing level of 100% duration, 20dB emphasis, and an ISI of 0ms, the next level of play will be at 100% duration, 10dB emphasis, and an ISI of 500ms. Flow then proceeds to decision block 2716.
At decision block 2716, a determination is made as to whether the highest skill level has been reached. If the subject has correctly responded to the last 3 trials, with no processing of the phonemes, and with minimal ISI between the target and distractor, then flow proceeds to block 2718. Otherwise flow proceeds to decision block 2722.
At decision block 2722, a determination is made as to whether the subject has reached a threshold. In one embodiment, a threshold is reached if the subject has had 8 skill level reversals within 6 skill levels of each other. If the subject has not reached a threshold, flow proceeds back to block 2704 where another trial is begun.
If the subject has reached a threshold, flow proceeds to block 2718.
At block 2718, a new stimulus category is selected. That is, a new phoneme pair is selected for testing.
Thus, if the subject has been tested with the phoneme pair ba-da, and has either mastered the pair by reaching the highest skill level, or has reached a threshold, then an alternate phoneme pair is selected, say aba-ada. Flow then proceeds back to block 2704 where a trial awaits using the new phoneme pair. In one embodiment, the skill level used for the new phoneme pair is selected to be 5 less than previously achieved for that pair. Or, if the subject has not yet been tested on the new phoneme pair, the skill level is set to 1. Testing continues indefinitely, or for the time allotted for Phoneme Identification on the subject's daily training schedule.
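The adaptive progression just described, 3 consecutive correct responses to advance, any miss to retreat, and a stop once 8 reversals fall within 6 skill levels of each other, can be sketched as follows. This is an illustrative Python model, not the program's actual code; the class name, method names, and level bounds are assumptions.

```python
# Sketch of the 3-up/1-down adaptive rule with the reversal-based
# stopping threshold (8 reversals within 6 skill levels of each other).
class Staircase:
    def __init__(self, min_level=1, max_level=20):
        self.level = min_level
        self.min_level = min_level
        self.max_level = max_level
        self.streak = 0            # consecutive correct responses
        self.last_direction = 0    # +1 after an increment, -1 after a decrement
        self.reversal_levels = []  # skill level recorded at each reversal

    def _record(self, direction):
        # A reversal occurs when the staircase changes direction.
        if self.last_direction and direction != self.last_direction:
            self.reversal_levels.append(self.level)
        self.last_direction = direction

    def respond(self, correct):
        """Update the skill level after one trial; return the new level."""
        if correct:
            self.streak += 1
            if self.streak == 3:           # 3-up: harder after 3 in a row
                self.streak = 0
                if self.level < self.max_level:
                    self.level += 1
                self._record(+1)
        else:
            self.streak = 0                # 1-down: easier after any miss
            if self.level > self.min_level:
                self.level -= 1
            self._record(-1)
        return self.level

    def threshold_reached(self):
        """True once 8 reversals have occurred within 6 levels of each other."""
        recent = self.reversal_levels[-8:]
        return len(recent) == 8 and max(recent) - min(recent) <= 6
```

A subject oscillating near their limit accumulates reversals at nearby levels, which is exactly the condition the threshold test detects.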
Referring now to Figure 28, a screen shot 2800 is shown for the seventh game in the Fast ForWord program, Language Comprehension Builder. The screen shot 2800 contains an ear/hand button 2802 for beginning a trial, a stop sign 2804 for exiting the game, a number score 2806 corresponding to the number of correct responses, and level icons 2808 for indicating the processing level that is currently being tested. In addition, four windows 2810 are shown for containing one to four stimulus images, according to the particular trial being presented. If fewer than four stimulus images are required for a trial, they are placed randomly within the four windows 2810. At the bottom of the screen 2800 are smaller progress windows 2812 for holding progress elements. The progress elements provide a visual indicator to a subject of his/her progress. As in previously discussed games, when all of the progress elements are obtained, usually after ten correct responses, a reward animation is presented to the subject. In one embodiment of this game, the reward animation builds a space ship out of the progress elements.
The stimulus that is provided to the subject is in the form of command sentences. The sentences are divided into 7 comprehension levels, with each level having between 4 and 10 groups of sentences. Each group has 5 sentences. For each stimulus sentence, a corresponding image is provided, along with 1-3 distractor images.
The subject is to listen to the stimulus sentence and select the corresponding image.


Each of the stimulus sentences may be processed by stretching words, or selected phonemes, in time, and by emphasizing particular frequency envelopes, as shown by table 3040 in Figure 30. Stretching and emphasis of selected words/phonemes is similar to that described above in other games.
The stimulus sentences presented to a subject are provided in Appendix A.
Referring now to Figure 29, a flow chart 2900 is provided to illustrate the training tutorial aspect of the game. Training begins at block 2902 and proceeds to block 2904.
At block 2904, the subject is prompted to "press the yellow button". That is, the ear/hand button 2802.
Flow then proceeds to decision block 2906.
At decision block 2906, a determination is made as to whether the subject has selected the ear/hand button 2802. If not, flow proceeds back to block 2904 where the subject is again prompted, after a predetermined interval. If the subject has pressed the button, flow proceeds to block 2908.
At block 2908, the ear/hand button 2802 is presented, without audio prompting.
Flow then proceeds to decision block 2910.
At decision block 2910, a determination is made as to whether the subject has pressed the button 2802. If not, then the subject stays in this loop until the button 2802 is pressed.
Once pressed, flow proceeds to block 2912.
At block 2912, a subject is presented with a single image and corresponding audio stimulus. In one embodiment, the stimulus is processed at level 1, with 150% duration and 20dB
selective emphasis. Flow then proceeds to decision block 2914.
At decision block 2914, a determination is made as to whether the subject has selected the image corresponding to the presented stimulus. If not, then flow proceeds back to block 2912 where the subject is again prompted with the stimulus, after a predetermined interval. However, if the subject selected the image, flow proceeds to decision block 2916.
At decision block 2916, a determination is made as to whether the subject has correctly selected an image, 3 times. If not, then flow proceeds back to block 2912 where another image/stimulus combination is presented.
However, if the subject has correctly selected an image, 3 times, flow proceeds to block 2918.
At block 2918, an image/stimulus combination is presented, along with a distractor image. Flow then proceeds to decision block 2920.
At decision block 2920, a determination is made as to whether the subject selected the appropriate image.
If not, then flow proceeds back to block 2918. However, if the subject selected the correct image, flow proceeds to decision block 2922.
At decision block 2922, a determination is made as to whether the subject has correctly responded to 4 out of 5 trials, on a sliding scale. If not, then flow proceeds back to block 2918. If the subject has correctly responded 4 out of the last 5 trials, flow proceeds to block 2924 allowing the subject to start the game.
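The tutorial's exit criterion, 4 correct responses out of the last 5 trials on a sliding window, can be modeled with a small helper. The function names are illustrative assumptions, not the program's identifiers.

```python
# Sliding-window tracker for the "4 of the last 5 trials correct" rule.
from collections import deque

def make_tracker(window=5, needed=4):
    recent = deque(maxlen=window)   # keeps only the last `window` results

    def record(correct):
        """Record one trial; return True once the criterion is met."""
        recent.append(bool(correct))
        return len(recent) == window and sum(recent) >= needed

    return record
```

Because the deque has a fixed maximum length, each new trial automatically evicts the oldest one, which is what makes the criterion "sliding" rather than cumulative.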
Now referring to Figure 30, a flowchart 3000 is shown illustrating operation of the Language Comprehension Builder game. The game begins at block 3002 and proceeds to block 3004.
At block 3004 an image and stimulus combination is presented to the subject.
In one embodiment, the game begins by selecting a group from Level 2, and then by randomly selecting one of the trials from the selected group. The processing of the sentence is performed at 150% duration with 20dB selected emphasis.
Flow then proceeds to decision block 3006.
At decision block 3006, a determination is made as to whether the subject correctly selected the image associated with the stimulus sentence. If not, the subject is shown the correct response, and flow proceeds back to block 3004 where another stimulus/image combination from the same group is presented. If the subject selects the correct image, flow proceeds to decision block 3008.
At decision block 3008, a determination is made as to whether all sentences within a stimulus set have been successfully completed. As mentioned above, the program begins in Level 2, by selecting a particular stimulus set for presentation. The program stays within the selected stimulus set until all stimulus sentences have been responded to correctly. The program then selects another stimulus set from within Level 2. If the subject has not correctly completed all sentences within a stimulus set, flow proceeds back to block 3004 where another sentence is presented. If the subject has completed all stimuli within a set, flow proceeds to decision block 3010.
At decision block 3010, a determination is made as to whether all sets within a particular comprehension level have been completed. If not, then a new set is selected, and flow proceeds back to block 3004. However, if all sets within a comprehension level have been completed, flow proceeds to block 3012.
At block 3012, the comprehension level is incremented. In one embodiment, a subject proceeds through comprehension levels 2-6, in order, with levels 7 and 8 interspersed within levels 3-6. Flow then proceeds to decision block 3014.
At decision block 3014, a determination is made as to whether all comprehension levels have been completed. If not, then flow proceeds back to block 3004 where the subject is presented with an image/stimulus combination from a stimulus set within the new comprehension level. However, if the subject has progressed through all stimulus sets for all comprehension levels, flow proceeds to block 3016.
At block 3016, the processing level applied to the stimulus sets is increased.
The processing levels are shown in table 3040. For example, if a subject has just completed processing level 2, having a duration of 125% and 20dB emphasis, the processing level is incremented to level 3. This will present all stimuli at 100%
duration, and 20dB emphasis. In addition, it will reset the comprehension level to level 2, and will restart the stimulus set selection. Flow then proceeds to decision block 3018.
At decision block 3018, a determination is made as to whether all processing levels have been completed.
If not, then flow proceeds back to block 3004 where a stimulus set from level 2 is presented to the subject, at the new processing level. However, if all the processing levels have been completed, the subject remains at processing level 5 (normal speech). Flow then proceeds to block 3020.
At block 3020, the comprehension levels are reset, so that the subject is presented again with stimulus from level 2. However, no alteration in the stimulus is performed. The subject will remain at processing level 5.
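The nested progression of flowchart 3000 can be summarized in a small sketch: finishing every stimulus set in every comprehension level advances the processing level and restarts the comprehension levels, and once processing level 5 (normal speech) is reached the subject simply cycles again at that level. The function name and the level table below are assumptions for illustration; the duration/emphasis pairs for levels 1-3 and 5 echo values quoted in the text, while level 4 is a placeholder guess since table 3040 is not reproduced here.

```python
# Illustrative (duration %, emphasis dB) per processing level.
PROCESSING_LEVELS = {
    1: (150, 20),  # 150% duration, 20dB emphasis (from the text)
    2: (125, 20),
    3: (100, 20),
    4: (100, 10),  # placeholder assumption; table 3040 not reproduced here
    5: (100, 0),   # level 5: normal (unprocessed) speech
}

def advance(state, completed_all_comprehension_levels):
    """Next (processing_level, comprehension_level) after finishing a level."""
    proc, comp = state
    if not completed_all_comprehension_levels:
        return (proc, comp + 1)      # move on to the next comprehension level
    if proc < 5:
        return (proc + 1, 2)         # harder processing, restart at level 2
    return (5, 2)                    # remain at normal speech, cycle again
```

The key design point mirrored here is that comprehension content and acoustic processing advance on separate axes: content is exhausted before the processing is made harder.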

WO 99/31640 PCT/US98/26528
Study has shown that several weeks are required for a subject to advance through all of the comprehension levels, and all of the processing levels. Therefore, when a subject begins each day, he/she is started within the comprehension level and stimulus set that was last played. And, the stimulus set will be presented at the processing level last played.
In Language Comprehension Builder, as in all of the other games, detailed records are kept regarding each trial, indicating the number of correct responses and incorrect responses for each processing level, skill level and stimulus set. These records are uploaded to a central server at the end of each day, so that a subject's results may be tabulated and analyzed by an SLP, either working directly with a subject, or remotely. Based on analysis by the SLP, modifications to training parameters within Fast ForWord may be made, and downloaded to the subject. This allows a subject to begin each day with a sensory training program that is individually tailored to his/her skill level.
The above discussion provides a detailed understanding of the operation of the present invention as embodied in the game modules within the program entitled Fast ForWord. Each of the game modules presents different problems to a subject, using modified phonemes, frequency sweeps or speech commands that are stretched, emphasized or separated in time, according to the subject's ability, and according to predefined processing parameters within the program. Although alternative acoustic processing methodologies may be used, discussion will now be directed at algorithms developed specifically for use by the above described games.
In one embodiment, a two-stage speech modification procedure was used. The first stage involved time-scale modification of speech signals without altering their spectral content. The time-scale modification is called the "phase vocoder", and will be further described below. The second speech modification stage that was developed uses an algorithm that differentially amplifies and disambiguates faster phonetic elements in speech.
"Fast elements" in speech are defined as those that occur in the 3-30Hz range within an envelope of narrow-band speech channels of a rate-changed speech signal. An emphasis algorithm for these fast elements was implemented using two methods: a filter-bank summation method and an overlap-add method based on a short-time Fourier transform. Both of these emphasis algorithms will be further described below.
Time-scale modification
Referring to Figure 31, a flow chart 3100 is provided that illustrates time-scale modification of speech signals according to the present invention. Modification begins at block 3102 and proceeds to block 3104.
At block 3104, segmented digital speech input is provided to a processor. The segmented speech is assumed to be broadband and composed of a set of narrow-band signals obtained by passing the speech segment through a filter-bank of band-pass filters. The speech signals may be written as follows:
$$f(t) = \sum_{n=1}^{N} f_n(t)$$

where

$$f_n(t) = \int f(\tau)\,h(t-\tau)\cos[\omega_n(t-\tau)]\,d\tau$$

This is the convolution integral of the signal $f(t)$ and $h(t)$, a prototypical low-pass filter modulated by $\cos(\omega_n t)$, where $\omega_n$ is the center frequency of the filters in the filter-bank, an operation commonly referred to as heterodyning. Flow then proceeds to block 3106.
At block 3106, the above integral is windowed, and a short-term Fourier transform of the input signal is evaluated at the radian frequency $\omega_n$ using an FFT algorithm. The complex value of this transform is denoted:
$$f_n(t) = |F(\omega_n, t)|\cos[\omega_n t + \varphi_n(\omega_n, t)]$$

where $\varphi_n(\omega_n, t)$ is the phase modulation of the carrier $\cos(\omega_n t)$. Flow then proceeds to block 3108.
At block 3108 the amplitude and phase of the STFT is computed. It is known that the phase function is not a well-behaved function; however its derivative, the instantaneous frequency, is bounded and band limited. Therefore, a practical approximation for $f_n(t)$ is:

$$f_n(t) = |F(\omega_n, t)|\cos\!\left[\omega_n t + \int \dot{\varphi}_n(\omega_n, \tau)\,d\tau\right]$$

where $\dot{\varphi}_n$ is the instantaneous frequency. Flow then proceeds to block 3110.
At block 3110, $\dot{\varphi}_n$ can be computed from the unwrapped phase of the short-term Fourier transform. A time-scaled signal can then be synthesized by interpolating the short-term Fourier transform magnitude and the unwrapped phase to the new time scale as shown below:

$$\tilde{f}(t) = \sum_{n} |F(\omega_n, t/\beta)|\cos\!\left[\omega_n t + \beta\!\int \dot{\varphi}_n(\omega_n, \tau)\,d\tau\right]$$

where $\beta$ is the scaling factor, which is greater than one for time-scale expansion. An efficient method to compute the above equation makes use of cyclic rotation and the FFT algorithm along with an overlap-add procedure to compute the short-time discrete Fourier transform. Appropriate choice of the analysis filters $h(t)$ and interpolating filters (for interpolation of the short-term Fourier transform to the new time scale) is important to the algorithm. In one embodiment, linear interpolation based on the magnitude and phase of the short-time Fourier transform was used. The analysis filter $h(n)$ was chosen to be a Kaiser window multiplied by an ideal impulse response as shown:
$$h(n) = \frac{\omega_c}{\pi}\,\mathrm{sinc}\!\left(\frac{\omega_c n}{\pi}\right)\mathrm{kaiser}(n, 6.8)$$

where

$$\mathrm{kaiser}(n, \alpha) = \frac{I_0\!\left(\alpha\sqrt{1 - \left(\frac{n - N/2}{N/2}\right)^{2}}\right)}{I_0(\alpha)}, \quad 0 \le n < N$$

where $I_0(\alpha)$ is the zeroth-order modified Bessel function of the first kind and $N$ is the length of the analysis window over which the FFT is computed. Flow then proceeds to block 3112.
At block 3112, a short-term inverse FFT is computed to produce digital speech output. This output is then provided at block 3114.
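The time-scale modification just described can be sketched compactly: compute a short-time FFT, track each bin's unwrapped phase increment (its instantaneous frequency), interpolate magnitude to the new time scale, and resynthesize by overlap-add. This is a hedged illustration, not the patented implementation: it uses a Hanning analysis window rather than the Kaiser-windowed low-pass filter of the equations above, and the FFT size and hop are arbitrary choices.

```python
import numpy as np

def time_stretch(x, beta, n_fft=1024, hop=256):
    """Stretch x by factor beta (>1 expands) without shifting pitch."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    # Analysis: short-time FFT frames at the original hop.
    stft = np.array([np.fft.rfft(win * x[i*hop : i*hop + n_fft])
                     for i in range(n_frames)])
    # Synthesis instants sampled on the stretched time grid.
    t = np.arange(0, n_frames - 1, 1.0 / beta)
    # Expected per-hop phase advance of each bin's carrier.
    omega = 2 * np.pi * np.arange(stft.shape[1]) * hop / n_fft
    out = np.zeros(len(t) * hop + n_fft)
    phase = np.angle(stft[0])
    for k, ti in enumerate(t):
        i = int(ti)
        frac = ti - i
        # Linear interpolation of magnitude between neighbouring frames.
        mag = (1 - frac) * np.abs(stft[i]) + frac * np.abs(stft[i + 1])
        # Deviation from the expected advance = instantaneous frequency offset.
        dphi = np.angle(stft[i + 1]) - np.angle(stft[i]) - omega
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))  # wrap to [-pi, pi]
        frame = np.fft.irfft(mag * np.exp(1j * phase), n_fft)
        out[k*hop : k*hop + n_fft] += win * frame          # overlap-add
        phase += omega + dphi   # accumulate unwrapped phase at the output rate
    return out
```

Accumulating `omega + dphi` at the synthesis hop is what scales the phase trajectory by the factor β while leaving each bin's frequency, and hence the pitch, unchanged.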
Filter-bank emphasis algorithm
Now referring to Figure 32, a flow chart 3200 is shown that illustrates implementation of an emphasis algorithm according to the present invention. The algorithm begins at block 3202 and proceeds to block 3204.
At block 3204, it is assumed that the speech signal can be synthesized through a bank of band-pass filters, as described above. This time, however, no heterodyning of a prototypical low-pass filter is used. Instead, a set of up to 20 second-order Butterworth filters with center frequencies logarithmically spaced between 100Hz and the Nyquist frequency is used. The output of each band-pass filter results in a narrow-band channel signal $f_n(t)$.
Flow then proceeds to block 3206.
At block 3206, we computed the analytical signal as follows:
$$a_n(n) = f_n(n) + i\,H(f_n(n))$$

where $H$ is the Hilbert transform of a signal, defined as:

$$H(n) = f_n(n) * \frac{1}{\pi n} = \int \frac{f_n(\tau)}{\pi(n - \tau)}\,d\tau$$

The Hilbert transform was computed using the FFT algorithm. It is known that the absolute value of the analytical signal is the envelope of a narrow-band signal. Thus, an envelope $e_n(n)$ is obtained by the following operation:
$$e_n(n) = |a_n(n)|$$

The envelope within each narrow-band channel is then band-pass filtered using a second-order Butterworth filter with the cut-offs usually set between 3-30Hz (the time scale at which phonetic events occur in rate-changed speech). The band-pass filtered envelope is then rectified to form the new envelope as follows:
$$e_n^{new}(n) = S(e_n(n) * g(n))$$

where $S(x) = x$ for $x \ge 0$, otherwise $S(x) = 0$, and $g(n)$ is the impulse response of the band-pass second-order Butterworth filter. Flow then proceeds to block 3208.
At block 3208, the signal is modified within each band-pass channel to carry this new envelope, as shown below:
$$f_n^{new}(n) = f_n(n)\,\frac{e_n^{new}(n)}{e_n(n)}$$

Flow then proceeds to block 3210.
At block 3210 the modified signal is obtained by summing the narrow-band filters with a differential gain for each channel as follows:
$$f^{new}(n) = \sum_{n} w_n f_n^{new}(n)$$

where $w_n$ is the gain for each channel. The envelope is modified only within a specified frequency range from 1-10kHz, which normally spans about 16 channels. Flow then proceeds to block 3212.
At block 3212 segmented digital speech output is provided.
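The filter-bank emphasis stage can be sketched as below. To keep the illustration dependency-free, crude FFT "brick-wall" band-pass filters stand in for the second-order Butterworth filters of the text, and the Hilbert transform is computed via the FFT as the text describes; the channel edges, gains, and function names are assumptions.

```python
import numpy as np

def fft_bandpass(x, lo, hi, fs):
    """Crude band-pass: zero FFT bins outside [lo, hi] Hz (stand-in for
    the Butterworth filters of the text)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(f < lo) | (f > hi)] = 0
    return np.fft.irfft(X, len(x))

def hilbert_envelope(x):
    """Envelope |a(n)| of the analytic signal, computed via the FFT."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1.0
    h[1:(len(x) + 1) // 2] = 2.0      # double positive frequencies
    if len(x) % 2 == 0:
        h[len(x) // 2] = 1.0          # Nyquist bin kept once
    return np.abs(np.fft.ifft(X * h))

def emphasize(x, fs, bands, w=None, mod_lo=3.0, mod_hi=30.0):
    """Sum of channels carrying rectified, 3-30Hz-filtered envelopes."""
    w = np.ones(len(bands)) if w is None else w
    out = np.zeros(len(x))
    for gain, (lo, hi) in zip(w, bands):
        chan = fft_bandpass(x, lo, hi, fs)            # narrow-band channel
        env = hilbert_envelope(chan)                  # e_n(n)
        # Band-pass the envelope to 3-30 Hz, then half-wave rectify: S(x).
        new_env = np.maximum(fft_bandpass(env, mod_lo, mod_hi, fs), 0.0)
        # Re-impose the modified envelope on the channel, then sum with gain.
        out += gain * chan * (new_env / np.maximum(env, 1e-12))
    return out
```

Each channel is scaled by the ratio of new to old envelope, so only the 3-30Hz modulation content, the "fast elements", survives into the resynthesized signal.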
Overlap-add emphasis algorithm
Referring to Figure 33, a flow chart 3300 for an alternative emphasis algorithm is provided. This algorithm improves upon the filter-bank summation described above by making use of the property of equivalence between the short-time Fourier transform and the filter-bank summation algorithm. In this embodiment, the short-time Fourier transform is computed using an overlap-add procedure and the FFT algorithm. Flow begins at block 3302 and proceeds to block 3304.
At block 3304, the short-time Fourier transform is computed over a sliding window given by the following equation:
$$X_k(r) = \sum_{m=-\infty}^{\infty} h(r - m)\,x(m)\,e^{-j2\pi km/N}$$

where $h(n)$ is a Hamming window and the overlap between sections was chosen to be less than a quarter the length of the analysis window. The envelope can then be obtained within narrow-band channels from the absolute value of the short-time Fourier transform. The number of narrow-band channels is equal to half the size of the length over which the FFT is computed.
The energy of the envelope within critical band channels is then averaged, as shown:
$$e_n(r) = \sum_{C_{n-1} \le k \le C_n} |X_k(r)|$$
where $C_n$ is the corner frequency of the critical-band channel $n$. At present, critical-band frequencies for children with LLI are unknown; therefore the present invention approximates the bands using parameters proposed by Zwicker. See E. Zwicker and E. Terhardt, "Analytical expressions for critical-band rate and critical bandwidth as a function of frequency," J. Acoust. Soc. Am., vol. 68, pp. 1523-1525, 1980. As critical-band frequencies for children with LLI become available, they can be incorporated into the present invention.
The envelope within each critical-band channel is then band-pass filtered with cut-offs usually set between 3-30Hz with type 1 linear-phase FIR equiripple filters. The band-pass filtered envelope is then threshold rectified. In contrast to the filter-bank emphasis algorithm, the modified envelope is added to the original envelope to amplify the fast elements while not distorting the slower modulations. This is given by the following equation:
$$e_n^{new}(n) = e_n(n)\,T(e_n(n) * g(n))$$

where $T(x) = x + 1$ for $x \ge 0$, otherwise $T(x) = 1$.
Flow then proceeds to block 3308.
At block 3308, a modified signal is obtained by summing the short-time Fourier transform using a weighted overlap-add procedure as shown below:
$$f^{new}(n) = \sum_{s} g(n - s)\,\frac{1}{N}\sum_{k=0}^{N-1} X_k^{new}(s)\,e^{\,j2\pi kn/N}$$

where $g(n)$ is the synthesis filter, which was also chosen to be a Hamming window. Flow then proceeds to block 3310.
At block 3310, windowing and overlap addition for the algorithm is performed.
Flow then proceeds to block 3312 where segmented digital speech output is provided.
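The short-time Fourier transform with weighted overlap-add resynthesis that underlies this variant can be sketched as below: a Hamming analysis window, a hop well under a quarter of the window length, and a synthesis weighting derived from the same window. Normalizing by the summed squared window is a common implementation choice assumed here, not a detail taken from the text; with no spectral modification between analysis and synthesis the procedure reconstructs its input, which is a useful sanity check before inserting the envelope modification.

```python
import numpy as np

def stft(x, n_fft=256, hop=64):
    """Analysis: Hamming-windowed short-time FFT frames at hop `hop`."""
    win = np.hamming(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    return np.array([np.fft.rfft(win * x[r*hop : r*hop + n_fft])
                     for r in range(n_frames)])

def istft(X, n_fft=256, hop=64):
    """Synthesis: weighted overlap-add, normalized by the window overlap."""
    win = np.hamming(n_fft)
    out = np.zeros((len(X) - 1) * hop + n_fft)
    norm = np.zeros_like(out)
    for r, frame in enumerate(X):
        out[r*hop : r*hop + n_fft] += win * np.fft.irfft(frame, n_fft)
        norm[r*hop : r*hop + n_fft] += win ** 2
    return out / np.maximum(norm, 1e-12)
```

The emphasis step would sit between `stft` and `istft`, rescaling $|X_k(r)|$ per critical band by the rectified envelope factor before resynthesis.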
Although the present invention and its objects, features, and advantages have been described in detail, other embodiments are encompassed by the invention. For example, a number of different games have been shown, each dealing with a stimulus set that is processed in the time domain, and presented to a subject in a manner that the subject can understand. The processing is designed to emphasize or stretch those components of speech that are the most difficult for an LLI subject to differentiate, so that they may be more easily understood. In addition, the processing allows distinct frequency sweeps, phonemes or words to be separated in time, first by a significant amount, say 500ms, with the amount of separation being gradually reduced to that of normal speech. Once a subject gains success at distinguishing between similar speech elements, at a high level of processing, the amount of processing is gradually reduced until it reaches the level of normal speech. The particular games used to train subjects have distinct advantages, but are not exclusive. Other games are anticipated that will incorporate the novel aspects of the present invention, to further train a subject's temporal processing ability to recognize and distinguish between short duration acoustic events that are common in speech.
Furthermore, the Fast ForWord program has been shown for execution on a personal computer connected to a central server. However, as technology advances, it is envisioned that the program could be executed either by a diskless computer attached to a server, by a handheld processing device such as a laptop, or eventually by a palmtop device such as a Nintendo GameBoy. As long as the graphical images and auditory prompts can be presented in a timely fashion, and with high quality, the nature of the device used to present the material is irrelevant.
Those skilled in the art should appreciate that they can readily use the disclosed conception and specific embodiments as a basis for designing or modifying other structures for carrying out the same purposes of the present invention without departing from the spirit and scope of the invention as defined by the appended claims.

APPENDIX A
Level 2
1. Attribute/Stative
Stimulus: The ball is big
The man is big.
The ball is small.
The ball is big.
Stimulus: The cup is broken
The wind-up toy car is broken.
The cup is broken.
The cup is not broken.
Stimulus: The baby is crying
The baby is not crying.
The boy is crying.
The baby is crying.
Stimulus: The box is open
The box is open.
The box is closed.
The can is open.
Stimulus: The girl is dirty
The girl is dirty.
The shoe is dirty.
The shoe is not dirty.
2. Simple Negation
Stimulus: The boy is not eating
The boy is not eating.
The boy is eating.
Stimulus: The boy is not riding
The boy is not riding.
The boy is riding.
Stimulus: The baby is not crying
The baby is crying.
The baby is not crying.
Stimulus: The boy does not have a balloon
The boy has a balloon.
The boy does not have a balloon.
Stimulus: The girl does not have shoes
The girl does not have shoes.
The girl has shoes.
3. Object Pronouns: Him & Her
Stimulus: Point to her.

A girl doll.
A boy doll.
Stimulus: Point to him.
A girl doll.
A boy doll.
Stimulus: Point to her.
A girl doll.
A boy doll.
Stimulus: Point to him.
A girl doll.
A boy doll.
Stimulus: Point to her.
A girl doll.
A boy doll.
4. Possession
Stimulus: The clown has a balloon.
The clown has a balloon.
The clown has a flower.
The boy has a balloon.
Stimulus: The dog has spots.
The dog does not have spots.
The boy has spots.
The dog has spots.
Stimulus: The tree has apples.
The girl has apples.
The tree has apples.
The tree does not have apples.
Stimulus: The bunny has a carrot.
The bunny does not have a carrot.
The cat has a carrot.
The bunny has a carrot.
Stimulus: The girl has shoes.
The girl has shoes.
The girl does not have shoes.
The clown has shoes.
5. Lexicon Descriptions
Stimulus: Point to dark.
Dirty example.
Down example.
Dark example.
Big example.
Stimulus: Point to big.
Blue example.


Big example.
Open example.
Small example.
Stimulus: Point to dirty.
Dirty example.
Dark example.
Hot example.
Down example.
Stimulus: Point to yellow.
Yellow example.
Small example.
Broken example.
Blue example.
Stimulus: Point to off.
On example.
Open example.
Off example.
Wet example.
6. Lexicon Action Words
Stimulus: Point to wash.
Sweep example.
Fall example.
Wash example.
Run example.
Stimulus: Point to run.
Sing example.
Jump example.
Write example.
Run example.
Stimulus: Point to kick.
Throw example.
Kick example.
Push example.
Cry example.
Stimulus: Point to drink.
Eat example.
Run example.
Drink example.
Tear example.
Stimulus: Point to pull.
Pull example.
Push example.
Throw example.
Play example.
Level 3

7. Noun Singular/Plural (By Inflection Only)
Stimulus: Point to the picture of the cups.
Point to the picture of the cup.
Point to the picture of the cups.
Stimulus: Point to the picture of the boat.
Point to the picture of the boat.
Point to the picture of the boats.
Stimulus: Point to the picture of the balloons.
Point to the picture of the balloons.
Point to the picture of the balloon.
Stimulus: Point to the picture of the houses.
Point to the picture of the house.
Point to the picture of the houses.
Stimulus: Point to the picture of the cat.
Point to the picture of the cat.
Point to the picture of the cats.
8. Qualifiers: None
Stimulus: Which clown has none?
The clown has many.
The clown has some.
The clown has one.
The clown has none.
Stimulus: Which tree has none?
The tree has none.
The tree has one.
The tree has some.
The tree has many.
Stimulus: Which dog has none?
The dog has some.
The dog has many.
The dog has none.
The dog has one.
Stimulus: Which duck has none?
The duck has many.
The duck has none.
The duck has one.
The duck has some.
Stimulus: Which wagon has none?
The wagon has some.
The wagon has none.
The wagon has many.
The wagon has one.
9. Subject Relativization
Stimulus: The boy who is mad is pulling the girl.
The boy who is happy is pulling the girl.
The boy who is mad is pulling the girl.
The girl who is mad is pulling the boy.
The girl who is happy is pulling the boy.
Stimulus: The girl who is mad is pushing the boy.
The girl who is happy is pushing the boy.
The boy who is mad is pushing the girl.
The girl who is mad is pushing the boy.
The boy who is happy is pushing the girl.
Stimulus: The clown who is big is chasing the girl.
The clown who is big is chasing the girl.
The girl who is little is chasing the clown.
The girl who is big is chasing the clown.
The clown who is little is chasing the girl.
Stimulus: The girl who is happy is pulling the boy.
The boy who is happy is pulling the girl.
The girl who is mad is pulling the boy.
The boy who is mad is pulling the girl.
The girl who is happy is pulling the boy.
Stimulus: The clown who is little is chasing the girl.
The girl who is little is chasing the clown.
The clown who is big is chasing the girl.
The girl who is big is chasing the clown.
The clown who is little is chasing the girl.
10. Active Voice Word Order
Stimulus: The boy is pulling the girl.
The boy is pulling the ball.
The girl is pulling the boy.
The boy and the girl are pulling the wagon.
The boy is pulling the girl.
Stimulus: The boy is pushing the girl.
The girl is pushing the boy.
The boy is pushing the girl.
The boy is pushing the clown.
They are pushing the ball.
Stimulus: The girl is pulling the boy.
The girl is pulling the boy.
They are pulling the wagon.
The dog is pulling the boy.
The boy is pulling the girl.
Stimulus: The boy is kicking the girl.
They are kicking the balls.
The girl is kicking the boy.
The clown is kicking the girl.
The boy is kicking the girl.


Stimulus: The girl is washing the boy.
They are washing the tub.
The girl is washing the dog.
The girl is washing the boy.
The boy is washing the girl.
11. Comparative with More
Stimulus: Which one is more happy?
Which one is less happy?
Which one is more happy?
Stimulus: Which one is more hairy?
Which one is more hairy?
Which one is less hairy?
Stimulus: Which one is more skinny?
Which one is less skinny?
Which one is more skinny?
Stimulus: Which one is more dirty?
Which one is more dirty?
Which one is less dirty?
Stimulus: Which one is more messy?
Which one is more messy?
Which one is less messy?
12. Reduced Subject Relative Clauses
Stimulus: The boy frowning is pulling the girl.
The girl frowning is pulling the boy.
The boy smiling is pulling the girl.
The girl smiling is pulling the boy.
The boy frowning is pulling the girl.
Stimulus: The boy smiling is pushing the girl.
The girl frowning is pushing the boy.
The boy smiling is pushing the girl.
The girl smiling is pushing the boy.
The boy frowning is pushing the girl.
Stimulus: The boy frowning is pushing the girl.
The boy frowning is pushing the girl.
The girl smiling is pushing the boy.
The boy smiling is pushing the girl.
The girl frowning is pushing the boy.
Stimulus: The girl smiling is pushing the boy.
The boy smiling is pushing the girl.
The girl frowning is pushing the boy.
The girl smiling is pushing the boy.
The boy frowning is pushing the girl.
Stimulus: The girl frowning is pulling the boy.
The girl frowning is pulling the boy.


The boy frowning is pulling the girl.
The girl smiling is pulling the boy.
The boy smiling is pulling the girl.
13. Complex Negation
Stimulus: The clown that is not on the box is little.
The clown that is not on the box is little.
The clown that is on the box is little.
The clown that is not on the box is big.
The clown that is on the box is big.
Stimulus: The girl that is chasing the clown is not big
The clown that is chasing the girl is big.
The girl that is chasing the clown is not big.
The clown that is chasing the girl is not big.
The girl that is chasing the clown is big.
Stimulus: The boy that is not sitting is looking at the girl.
The girl that is not sitting is not looking at the boy.
The girl that is not sitting is looking at the boy.
The boy that is not sitting is looking at the girl.
The boy that is not sitting is not looking at the girl.
Stimulus: The clown that is big is not on the box.
The clown that is little is not on the box.
The clown that is big is on the box.
The clown that is little is on the box.
The clown that is big is not on the box.
Stimulus: The book that is not on the table is blue.
The book that is on the table is blue.
The book that is on the table is red.
The book that is not on the table is blue.
The book that is not on the table is red.
14. Noun Singular/Plural (By Inflection Only)
Stimulus: Point to the picture of the watches.
Point to the picture of the watch.
Point to the picture of the watches.
Stimulus: Point to the picture of the tubs.
Point to the picture of the tub.
Point to the picture of the tubs.
Stimulus: Point to the picture of the hat.
Point to the picture of the hats.
Point to the picture of the hat.
Stimulus: Point to the picture of the bunny.
Point to the picture of the bunnies.
Point to the picture of the bunny.
Stimulus: Point to the picture of the socks.
Point to the picture of the sock.

Point to the picture of the socks.
15. Comparative with /-er/
Stimulus: Which one is happier?
Which one is happier?
Which one is happy?
Stimulus: Which one is hairier?
Which one is hairier?
Which one is more bald?
Stimulus: Which one is skinnier?
Which one is heavier?
Which one is skinnier?
Stimulus: Which one is dirtier?
Which one is dirtier?
Which one is cleaner?
Stimulus: Which one is messier?
Which one is messier?
Which one is cleaner?
Level 4
16. Passive Voice Word Order
Stimulus: The boy is being pushed by the girl.
The boy is being pushed by the clown.
The clown is being pushed by the girl.
The girl is being pushed by the boy.
The boy is being pushed by the girl.
Stimulus: The dog is being pulled by the clown.
The dog is being pulled by the boy.
The clown is being pulled by the dog.
The dog is being pulled by the clown.
The girl is being pulled by the clown.
Stimulus: The girl is being kicked by the boy.
The girl is being kicked by the boy.
The clown is being kicked by the boy.
The girl is being kicked by the clown.
The boy is being kicked by the girl.
Stimulus: The boy is being chased by the girl.
The dog is being chased by the girl.
The girl is being chased by the boy.
The boy is being chased by the girl.
The boy is being chased by the dog.
Stimulus: The boy is being kicked by the girl.
The girl is being kicked by the boy.
The boy is being kicked by the girl.
The clown is being kicked by the girl.
The boy is being kicked by the clown.
17. Wh-Object Questioning
Stimulus: What is the cat chasing? (the mouse)
What is the dog chasing? (the cat)
What is the cat chasing? (the mouse)
What is the mouse chasing? (the cat)
Stimulus: Who is the clown chasing? (the boy)
Who is the girl chasing?
Who is the clown chasing?
Who is the boy chasing?
Stimulus: Who is the girl pushing? (the boy)
Who is the mother pushing?
Who is the girl pushing?
Who is the boy pushing?
Stimulus: Who is the boy pulling? (the girl)
Who is the girl pulling?
Who is the boy pulling?
Who is the clown pulling?
Stimulus: Who is the girl pulling? (the clown)
Who is the clown pulling?
Who is the girl pulling?
Who is the boy pulling?
18. Quantifiers: Some
Stimulus: Look at these wagons with deer.
Which wagon has some?
Which wagon has one?
Which wagon has some?
Which wagon has many?
Which wagon has none?
Stimulus: Look at these clowns with balloons.
Which clown has some?
Which clown has one?
Which clown has none?
Which clown has some?
Which clown has many?
Stimulus: Look at these ducks with babies.
Which duck has some?
Which duck has none?
Which duck has many?
Which duck has some?
Which duck has one?
Stimulus: Look at these trees with apples.
Which tree has some?
Which tree has one?
Which tree has none?


Which tree has many?
Which tree has some?
Stimulus: Look at these dogs with spots.
Which dog has some?
Which dog has some?
Which dog has none?
Which dog has one?
Which dog has many?
19. Verb Singular:
Stimulus: The fish swims.
The fish swims.
The fish swim.
Stimulus: The sheep stands.
The sheep stand.
The sheep stands.
Stimulus: The deer drinks.
The deer drinks.
The deer drink.
Stimulus: The fish eats.
The fish eat.
The fish eats.
Stimulus: The sheep jumps.
The sheep jumps.
The sheep jump.
20. Tense and Aspect: -ing
Stimulus: The girl is opening the present.
The girl opened the present.
The girl is opening the present.
The girl will open the present.
Stimulus: The boy is washing his face.
The boy is washing his face.
The boy will wash his face.
The boy washed his face.
Stimulus: The boy is pouring the juice.
The boy will pour the juice.
The boy poured the juice.
The boy is pouring the juice.
Stimulus: The girl is blowing up the balloon.
The girl is blowing up the balloon.
The girl will blow up the balloon.
The girl blew up the balloon.
Stimulus: The boy is eating his dinner.
The boy will eat his dinner.
The boy ate his dinner.
The boy is eating his dinner.
21. Noun Plurals/Singulars marked by Quantifier
Stimulus: Point to the picture of some socks.
Point to the picture of a sock.
Point to the picture of some socks.
Stimulus: Point to the picture of a bag.
Point to the picture of a bag.
Point to the picture of some bags.
Stimulus: Point to the picture of some dresses.
Point to the picture of some dresses.
Point to the picture of a dress.
Stimulus: Point to the picture of a bunny.
Point to the picture of a bunny.
Point to the picture of some bunnies.
Stimulus: Point to the picture of some dogs.
Point to the picture of a dog.
Point to the picture of some dogs.
40. Noun Plurals/Singulars marked by Quantifier Inflection
Note: # above is for programmatic purposes only. Level is correct.
Stimulus: Point to the picture of some balloons.
Point to the picture of a balloon.
Point to the picture of some balloons.
Stimulus: Point to the picture of a tree.
Point to the picture of some trees.
Point to the picture of a tree.
Stimulus: Point to the picture of some cats.
Point to the picture of some cats.
Point to the picture of a cat.
Stimulus: Point to the picture of some boxes.
Point to the picture of some boxes.
Point to the picture of a box.
Stimulus: Point to the picture of a cake.
Point to the picture of a cake.
Point to the picture of some cakes.
22. Aux-Be Singular
Stimulus: The fish is swimming.
The fish is swimming.
The fish are swimming.
Stimulus: The sheep is standing.
The sheep are standing.


The sheep is standing.
Stimulus: The deer is drinking.
The deer is drinking.
The deer are drinking.
Stimulus: The fish is eating.
The fish are eating.
The fish is eating.
Stimulus: The sheep is jumping.
The sheep is jumping.
The sheep are jumping.
Level 5
23. Case Marking Prepositions: For
Stimulus: Show me the groceries being carried for mom.
Show me the groceries being carried for mom.
Show me the groceries being carried with mom.
Show me the groceries being carried by mom.
Stimulus: Show me the breakfast made for mom.
Show me the breakfast made by mom.
Show me the breakfast made for mom.
Show me the breakfast made with mom.
Stimulus: Show me the drawing made for the boy.
Show me the drawing of the boy.
Show me the drawing made by the boy.
Show me the drawing made for the boy.
Stimulus: Show me the suitcase being carried for the man.
Show me the suitcase being carried with the man.
Show me the suitcase being carried for the man.
Show me the suitcase being carried by the man.
Stimulus: Show me the painting made for the girl.
Show me the painting made for the girl.
Show me the painting made by the girl.
Show me the painting made of the girl.
24. Tense and Aspect: -ed
Stimulus: The girl painted a picture.
The girl painted a picture.
The girl will paint a picture.
The girl is painting a picture.
Stimulus: The man sewed a shirt.
The man will sew a shirt.
The man is sewing a shirt.
The man sewed a shirt.

Stimulus: Someone tied the shoe.
Someone is tying the shoe.
Someone tied the shoe.
Someone will tie the shoe.
Stimulus: The boy tripped over the rock.
The boy tripped over the rock.
The boy is tripping over the rock.
The boy will trip over the rock.
Stimulus: The mother dressed the baby.
The mother is dressing the baby.
The mother will dress the baby.
The mother dressed the baby.
25. Aux-Be Plural
Stimulus: The deer are eating.
The deer is eating.
The deer are eating.
Stimulus: The sheep are jumping.
The sheep are jumping.
The sheep is jumping.
Stimulus: The fish are swimming.
The fish is swimming.
The fish are swimming.
Stimulus: The deer are standing.
The deer is standing.
The deer are standing.
Stimulus: The sheep are eating.
The sheep are eating.
The sheep is eating.
26. Third Person Subject Pronouns
Stimulus: Point to they are sitting.
Point to she is sitting.
Point to they are sitting.
Point to he is sitting.
Stimulus: Point to she is jumping.
Point to they are jumping.
Point to he is jumping.
Point to she is jumping.
Stimulus: Point to he is standing.
Point to he is standing.
Point to they are standing.
Point to she is standing.
Stimulus: Point to she is kicking.
Point to they are kicking.

Point to she is kicking.
Point to he is kicking.
Stimulus: Point to they are eating.
Point to he is eating.
Point to she is eating.
Point to they are eating.
Level 6
27. Tense and Aspect: will
Stimulus: The girl will open the present.
The girl opened the present.
The girl is opening the present.
The girl will open the present.
Stimulus: The boy will eat his dinner.
The boy ate his dinner.
The boy will eat his dinner.
The boy is eating his dinner.
Stimulus: The boy will trip over the rock.
The boy will trip over the rock.
The boy is tripping over the rock.
The boy tripped over the rock.
Stimulus: The man will sew his shirt.
The man is sewing his shirt.
The man sewed his shirt.
The man will sew his shirt.
Stimulus: The girl will paint a picture.
The girl is painting a picture.
The girl will paint a picture.
The girl painted a picture.
28. Possessive Morpheme /'s/
Stimulus: Show me the baby bear.
Show me the baby's bear.
Show me the baby bear.
Stimulus: Show me the chicken's dinner.
Show me the chicken's dinner.
Show me the chicken dinner.
Stimulus: Show me the mama cat.
Show me the mama cat.
Show me the mama's cat.
Stimulus: Show me the baby's duck.
Show me the baby duck.
Show me the baby's duck.
Stimulus: Show me the baby's bunny.

Show me the baby bunny.
Show me the baby's bunny.
39. Case Marking Prepositions: With
Stimulus: Show me the breakfast made with mom.
Show me the breakfast made for mom.
Show me the breakfast made by mom.
Show me the breakfast made with mom.
Stimulus: Show me the suitcase being carried with the man.
Show me the suitcase being carried by the man.
Show me the suitcase being carried with the man.
Show me the suitcase being carried for the man.
Stimulus: Show me the baby walking with the girl.
Show me the baby walking with the girl.
Show me the baby walking to the girl.
Show me the baby walking from the girl.
Stimulus: Show me the boy running with the girl.
Show me the boy running with the girl.
Show me the boy running from the girl.
Show me the boy running to the girl.
Stimulus: Show me the groceries being carried with mom.
Show me the groceries being carried for mom.
Show me the groceries being carried with mom.
Show me the groceries being carried by mom.
30. Double Embedding
Stimulus: The clown that is chasing the girl that is little is big.
The clown that is chasing the girl that is big is little.
The clown that is chasing the girl that is little is little.
The clown that is chasing the girl that is big is big.
The clown that is chasing the girl that is little is big.
Stimulus: The clown that is holding the balloon that is red is blue.
The clown that is holding the balloon that is blue is blue.
The clown that is holding the balloon that is red is blue.
The clown that is holding the balloon that is red is red.
The clown that is holding the balloon that is blue is red.
Stimulus: The girl that is chasing the clown that is big is little.
The girl that is chasing the clown that is big is big.
The girl that is chasing the clown that is little is big.
The girl that is chasing the clown that is big is little.
The girl that is chasing the clown that is little is little.
Stimulus: The clown that is holding the balloon that is blue is red.
The clown that is holding the balloon that is blue is red.
The clown that is holding the balloon that is red is red.
The clown that is holding the balloon that is red is blue.
The clown that is holding the balloon that is blue is blue.

Stimulus: The girl that is chasing the clown that is little is big.
The girl that is chasing the clown that is big is big.
The girt that is chasing the clown that is little is little.
The girl that is chasing the clown that is little is big.
The girl that is chasing the clown that is big is little.
31. Relativized Subject Ending in N-V-N
Stimulus: The girl who is pushing the boy is happy.
The boy who is pushing the girl is mad.
The girl who is pushing the boy is mad.
The girl who is pushing the boy is happy.
The boy who is pushing the girl is happy.
Stimulus: The clown who is chasing the girl is little.
The clown who is chasing the girl is little.
The girl who is chasing the clown is little.
The girl who is chasing the clown is big.
The clown who is chasing the girl is big.
Stimulus: The boy who is pulling the girl is mad.
The girl who is pulling the boy is happy.
The boy who is pulling the girl is mad.
The boy who is pulling the girl is happy.
The girl who is pulling the boy is mad.
Stimulus: The girl who is chasing the clown is little.
The clown who is chasing the girl is big.
The girl who is chasing the clown is big.
The girl who is chasing the clown is little.
The clown who is chasing the girl is little.
Level 7
32. Object Relativization
Stimulus: The girl is chasing the clown who is big.
The clown is chasing the girl who is little.
The girl is chasing the clown who is big.
The clown is chasing the girl who is big.
The girl is chasing the clown who is little.
Stimulus: The boy is pushing the girl who is happy.
The boy is pushing the girl who is happy.
The girl is pushing the boy who is happy.
The girl is pushing the boy who is mad.
The boy is pushing the girl who is mad.
Stimulus: The girl is pulling the boy who is mad.
The boy is pulling the girl who is mad.
The boy is pulling the girl who is happy.
The girl is pulling the boy who is happy.
The girl is pulling the boy who is mad.
Stimulus: The boy is pushing the girl who is mad.
The boy is pushing the girl who is mad.

The girl is pushing the boy who is mad.
The girl is pushing the boy who is happy.
The boy is pushing the girl who is happy.
Stimulus: The girl is chasing the clown who is little.
The clown is chasing the girl who is little.
The girl is chasing the clown who is big.
The girl is chasing the clown who is little.
The clown is chasing the girl who is big.
33. Reduced Subject Relative Clauses ending in -V-N
Stimulus: The girl pushing the boy is smiling The girl pushing the boy is smiling.
The boy pushing the girl is smiling.
The girl pushing the boy is frowning.
The boy pushing the girl is frowning.
Stimulus: The clown chasing the girl is little.
The clown chasing the girl is big.
The girl chasing the clown is big.
The clown chasing the girl is little.
The girl chasing the clown is little.
Stimulus: The girl pulling the boy is frowning.
The girl pulling the boy is smiling.
The boy pulling the girl is frowning.
The boy pulling the girl is smiling.
The girl pulling the boy is frowning.
Stimulus: The girl chasing the clown is little.
The clown chasing the girl is little.
The girl chasing the clown is little.
The clown chasing the girl is big.
The girl chasing the clown is big.
Stimulus: The boy pulling the girl is frowning.
The girl pulling the boy is smiling.
The boy pulling the girl is frowning.
The girl pulling the boy is frowning.
The boy pulling the girl is smiling.
34. Who vs. What
Stimulus: What is in the wagon? (ball)
Is nothing in the wagon?
Who is in the wagon?
What is in the wagon? (ball)
Stimulus: Who is in the tub? (man)
What is in the tub?
Who is in the tub? (man)
Is nothing in the tub?
Stimulus: What is under the table? (cup)
Who is under the table?

What is under the table?
Is nothing under the table?
Stimulus: What is on the chair? (ball)
What is on the chair? (ball)
Is nothing on the chair?
Who is on the chair?
Stimulus: Who is on the box? (clown)
Who is on the box?
What is on the box?
Is nothing on the box?
Level 8
35. Verb Plural
Stimulus: The deer eat.
The deer eats.
The deer eat.
Stimulus: The sheep jump.
The sheep jump.
The sheep jumps.
Stimulus: The fish swim.
The fish swims.
The fish swim.
Stimulus: The deer stand.
The deer stands.
The deer stand.
Stimulus: The sheep eat.
The sheep eat.
The sheep eats.
36. Relative Pronouns with Double Function
Stimulus: The girl who the boy is pushing is happy.
The girl who the boy is pushing is mad.
The boy who the girl is pushing is mad.
The girl who the boy is pushing is happy.
The boy who the girl is pushing is happy.
Stimulus: The boy who the girl is pulling is mad.
The boy who the girl is pulling is mad.
The boy who the girl is pulling is happy.
The girl who the boy is pulling is happy.
The girl who the boy is pulling is mad.
Stimulus: The girl who the clown is chasing, is chasing the boy.
The clown who the boy is chasing, is chasing the girl.
The girl who the clown is chasing, is chasing the boy.
The clown who the girl is chasing, is chasing the boy.

Stimulus: The boy who the girl is pulling, is pulling the clown.
The girl who the boy is pulling, is pulling the clown.
The clown who the boy is pulling, is pulling the girl.
The clown who the girl is pulling, is pulling the boy.
The boy who the girl is pulling, is pulling the clown.
Stimulus: The boy who the girl is pushing, is happy.
The boy who the girl is pushing is happy.
The girl who the boy is pushing is sad.
The boy who the girl is pushing is mad.
The girl who the boy is pushing is happy.
37. Object Relatives with Relativized Objects
Stimulus: The girl is hugging the boy that the clown is kissing.
The girl is hugging the boy that is kissing the clown.
The boy is hugging the girl that the clown is kissing.
The girl is hugging the boy that the clown is kissing.
The girl is hugging the clown that the boy is kissing.
Stimulus: The clown is hugging the girl that the boy is kissing.
The clown is hugging the boy that the girl is kissing.
The clown is hugging the girl that the boy is kissing.
The clown is hugging the girl that is kissing the boy.
The girl is hugging the clown that the boy is kissing.
Stimulus: The girl is kissing the clown that the boy is hugging.
The girl is kissing the boy that the clown is hugging.
The clown is kissing the girl that the boy is hugging.
The girl is kissing the clown that the boy is hugging.
The girl is kissing the clown that is hugging the boy.
Stimulus: The boy is hugging the girl that the clown is kissing.
The girl is hugging the boy that the clown is kissing.
The boy is hugging the girl that is kissing the clown.
The boy is hugging the clown that the girl is kissing.
The boy is hugging the girl that the clown is kissing.
Stimulus: The boy is kissing the girl that the clown is hugging.
The boy is kissing the girl that is hugging the clown.
The girl is kissing the boy that the clown is hugging.
The boy is kissing the girl that the clown is hugging.
The boy is kissing the clown that the girl is hugging.
38. Clefting
Stimulus: It's the clown that the girl chases.
It's the girl that the clown chases.
It's the boy that the clown chases.
It's the boy that the girl chases.
It's the clown that the girl chases.
Stimulus: It's the boy that the girl kicks.
It's the clown that the boy kicks.
It's the clown that the girl kicks.
It's the boy that the girl kicks.

SUBSTTTUTE SHEET (RULE Z6~

It's the girl that the boy kicks.
Stimulus: It's the girl that the boy pulls.
It's the clown that the boy pulls.
It's the girl that the boy pulls.
It's the boy that the girl pulls.
It's the clown that the girl pulls.
Stimulus: It's the boy that the clown pushes.
It's the boy that the clown pushes.
It's the girl that the clown pushes.
It's the girl that the boy pushes.
It's the clown that the boy pushes.
Stimulus: It's the boy that the clown chases.
It's the girl that the boy chases.
It's the clown that the boy chases.
It's the boy that the clown chases.
It's the girl that the clown chases.
39. Negative-Passive
Stimulus: The cat is not being outrun by the dog.
The cat is not being outrun by the dog.
The dog is not being outrun by the cat.
The boy is not being outrun by the cat.
Stimulus: The drawing is not to be received by the girl.
The drawing is not to be received by the girl.
The drawing is not to be received by the woman.
The drawing is not to be received by the girl.
Stimulus: The boy is not followed by the girl.
The girl is not followed by the boy.
The boy is not followed by the girl.
The clown is not followed by the boy.
Stimulus: The picture is not to be received from the boy.
The picture is not to be received from the boy.
The picture is not to be received from the man.
The picture is not to be received from the girl.
Stimulus: The clown is not led by the girl.
The clown is not led by the boy.
The girl is not led by the clown.
The clown is not led by the girl.

Claims (50)

We claim:
1. A method for training the sensory perceptual system in a human, the method comprising:
a) repetitively providing a first acoustic event to the human, the first acoustic event being stretched in the time domain;
b) sequentially after a) providing a second acoustic event to the human for recognition, the second acoustic event being stretched in the time domain;
c) requiring the human to recognize the second acoustic event within a predetermined time window; and
d) if the human recognizes the second acoustic event within the predetermined time window, reducing the amount that the first and second acoustic events are stretched.
2. The method of claim 1 wherein the sensory perceptual system that is being trained is the processing of temporal acoustic events common in speech.
3. The method of claim 2 wherein the temporal acoustic events common in speech are phonemes.
4. The method of claim 1 wherein the human has abnormal processing of temporal acoustic events.
5. The method of claim 1 wherein the human is a language-learning impaired (LLI) child.
6. The method of claim 1 wherein the first acoustic event is a distracter phoneme.
7. The method of claim 6 wherein the distracter phoneme is provided to the human a random number of times.
8. The method of claim 7 wherein the random number of times that the distracter phoneme is provided varies between 3 and 8.
9. The method of claim 1 wherein a) further comprises:
a1) separating each provision of the first acoustic event by a predetermined inter-stimulus interval (ISI).
10. The method of claim 9 wherein the ISI is initially set to 500ms.
11. The method of claim 9 wherein b) occurs after the same predetermined ISI.
12. The method of claim 1 wherein the second acoustic event is a target phoneme.
13. The method of claim 1 wherein both the first and second acoustic events are stretched in the time domain, without alteration of their frequency components.
14. The method of claim 1 wherein both the first and second acoustic events are phonemes that are initially stretched 150 percent in the time domain.
15. The method of claim 1 wherein c) further comprises:
C1) requiring the human to indicate recognition of the second acoustic event by release of a computer button.
16. The method of claim 1 wherein c) further comprises:
C1) requiring the human to indicate recognition of the second acoustic event by depressing a computer button.
17. The method of claim 1 wherein the predetermined time window is selected to accurately determine whether the human has distinguished between the repetitively provided first acoustic event and the second acoustic event.
18. The method of claim 1 further comprising:
e) reducing the amount that the first and second acoustic events are stretched to 125 percent of normal speech.
19. The method of claim 10 further comprising:
a2) progressively reducing the ISI between provisions of the first acoustic event and the second acoustic event as the human successfully recognizes the second acoustic event.
20. A method for training an LLI subject to distinguish between frequency sweeps common in phonemes, the method comprising:
a) presenting a first frequency sweep that increases in frequency;
b) presenting a second frequency sweep that decreases in frequency, the first and second frequency sweeps separated by an inter-stimulus interval (ISI);
c) wherein a) and b) occur in random order;
d) requiring an individual to recognize the order of presentation of the first and second frequency sweeps;
e) reducing or increasing the ISI separating the first and second frequency sweeps as the individual recognizes or fails to recognize the order of presentation, respectively; and
f) reducing the duration of the first and second frequency sweeps as the individual repeatedly recognizes their order of presentation.
21. The method of claim 20 wherein the frequency sweeps common in phonemes are sweeps of approximately sixteen octaves per second.
22. The method of claim 21 wherein the frequency sweeps common in phonemes are at approximately 500Hz, 1kHz and 2kHz.
23. The method of claim 20 wherein the first frequency sweep and the second frequency sweep, initially each have a duration of approximately 80ms.
24. The method of claim 20 wherein the first frequency sweep (F), and the second frequency sweep (S), are presented in the order: F-F, F-S, S-F or S-S.
25. The method of claim 20 wherein d) further comprises:
d1) presenting a first graphic image associated with the first frequency sweep;
d2) presenting a second graphic image associated with the second frequency sweep;
d3) requiring the individual to select the first graphic image and the second graphic image, as appropriate, to correspond to the order of presentation of the first and second frequency sweeps.
26. The method of claim 25 wherein the individual selects the first and second graphic images, as appropriate, by using a computer input device to designate selection.
27. The method of claim 20 wherein the individual recognizes the order of presentation of the first and second frequency sweeps by selecting a graphical representation associated with each of the sweeps on a computer.
28. The method of claim 20 wherein the ISI separating the first and second frequency sweeps is reduced when the individual properly recognizes their order of presentation in multiple trials.
29. The method of claim 28 wherein the number of multiple trials that must be properly recognized before the ISI is reduced is 3.
30. The method of claim 20 wherein the ISI separating the first and second frequency sweeps is increased when the individual fails to recognize their order of presentation after a single trial.
31. The method of claim 20 wherein the duration of the first and second frequency sweeps is reduced after the individual repeatedly recognizes their order of presentation with an ISI
that is common in speech.
32. The method of claim 31 wherein the ISI that is common in speech is less than 130ms.
33. The method of claim 31 wherein the ISI that is common in speech is between 110 and 125ms.
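Claims 28-33 describe a staircase: the ISI shortens only after three correct trials, lengthens after a single miss, and the ~80ms sweeps themselves shorten once the subject is reliable at a speech-like ISI below 130ms. A hedged sketch of that rule set, with illustrative 25ms/5ms step sizes and duration floor that the claims do not specify:

```python
def update_isi(isi_ms, history):
    """history is a list of per-trial booleans, most recent last."""
    if history and not history[-1]:
        return isi_ms + 25                # claim 30: increase after one miss
    if len(history) >= 3 and all(history[-3:]):
        return max(0, isi_ms - 25)        # claims 28-29: decrease after 3 hits
    return isi_ms

def update_duration(duration_ms, isi_ms, mastered):
    # Claims 23 and 31-33: sweeps start at ~80ms; once the subject is
    # repeatedly correct at an ISI common in speech (< 130ms), shorten them.
    if mastered and isi_ms < 130:
        return max(25, duration_ms - 5)   # 25ms floor is an assumption
    return duration_ms
```

The asymmetry (three correct trials down, one error up) makes the staircase converge near the ISI the subject can just barely handle, which is the point of the adaptive procedure.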
34. A method for repetitively and adaptively training a subject, having subnormal temporal acoustic processing capabilities, to distinguish between phonemes having similar acoustic characteristics, the method comprising:
a) providing a plurality of phoneme pairs, each pair having similar acoustic characteristics;
b) for each of the plurality of phoneme pairs, providing a pair of associated graphic images;
c) selecting from among the plurality of phoneme pairs, a phoneme pair to be presented to the subject;

d) processing the selected phoneme pair according to a predetermined skill level;
e) presenting to the subject the processed selected phoneme pair;
f) as a trial, requiring the subject to recognize one of the processed phonemes from the selected phoneme pair by selecting its associated graphic image; and
g) repeating c) through f).
35. The method as recited in claim 34 wherein the phoneme pairs comprise: aba-ada; ba-da; be-de; bi-di;
and va-fa.
36. The method as recited in claim 34 wherein each pair of phonemes contain a distractor phoneme and a target phoneme.
37. The method as recited in claim 34 wherein the associated graphic images are animals.
38. The method as recited in claim 37 wherein the animals appear to speak the phonemes.
39. The method as recited in claim 34 wherein d) comprises:
d1) determining the skill level for the processing;
d2) stretching the consonant portions of each of the phonemes in the phoneme pair;
d3) emphasizing selected frequency envelopes within each of the phonemes in the phoneme pair;
d4) separating the stretched and emphasized phonemes by an inter-stimulus interval (ISI);
d5) wherein the amount of stretching, emphasizing and separating applied by d2) through d4) depends on the determined skill level.
40. The method as recited in claim 39 wherein d1) stretches the consonant portions of each of the phonemes in the time domain, without significantly affecting frequency components of the phonemes.
41. The method as recited in claim 39 wherein d1) stretches the consonant portions of each of the phonemes between 100 and 150 percent depending on the determined skill level.
42. The method as recited in claim 39 wherein the selected frequency envelopes within each of the phonemes are emphasized between 0 and 20dB depending on the determined skill level.
43. The method as recited in claim 39 wherein the stretched and emphasized phonemes are separated by an ISI of between 0 and 500ms depending on the determined skill level.
44. The method as recited in claim 39 wherein the determined skill level is related to whether the subject correctly recognized one of the processed phonemes from the selected phoneme pair in a previous trial.
45. The method as recited in claim 34 wherein e) comprises:
e1) presenting one of the processed phoneme pairs as a target phoneme to the subject;

e2) after e1), presenting each of the processed phonemes within the phoneme pair, in random order separated by an inter-stimulus interval (ISI); and
e3) during presenting each of the processed phonemes, graphically associating the presenting of each of the processed phonemes with their associated graphic images.
46. The method as recited in claim 45 wherein the subject recognizes the target phoneme, after e2), by selecting its associated graphic image.
47. The method as recited in claim 39 wherein the method contains a plurality of skill levels ranging from 150 percent stretching, 20dB emphasis and 500ms ISI to 100 percent stretching, 0dB emphasis, and 0ms ISI.
48. The method as recited in claim 34 wherein the predetermined skill level is determined according to the ability of the subject to recognize one of the processed phonemes from the selected phoneme pair.
49. The method as recited in claim 48 wherein as the subject repeatedly recognizes one of the processed phonemes from the selected phoneme pair, the skill level advances in difficulty.
50. The method as recited in claim 48 wherein as the subject fails to recognize one of the processed phonemes from the selected phoneme pair, the skill level decreases in difficulty.
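Claims 41-43 and 47-50 define a ladder of skill levels spanning 150 percent stretching, 20dB emphasis and 500ms ISI at the easy end down to 100 percent, 0dB and 0ms at the hard end, with the level moving up on success and down on failure. A minimal sketch of that parameter ladder; the five-level count and linear interpolation between the claimed endpoints are illustrative assumptions:

```python
def level_params(level, n_levels=5):
    # Claims 41-43 and 47: interpolate between the easiest setting
    # (150% / 20dB / 500ms) and the hardest (100% / 0dB / 0ms).
    t = level / (n_levels - 1)          # 0.0 = easiest, 1.0 = hardest
    return {
        "stretch_pct": 150 - 50 * t,    # claim 41: 150 down to 100 percent
        "emphasis_db": 20 - 20 * t,     # claim 42: 20dB down to 0dB
        "isi_ms": 500 - 500 * t,        # claim 43: 500ms down to 0ms
    }

def next_level(level, recognized, n_levels=5):
    # Claims 49-50: advance in difficulty on success, back off on failure.
    step = 1 if recognized else -1
    return min(n_levels - 1, max(0, level + step))
```

At level 0 the processed phonemes are maximally exaggerated and widely spaced; by the top level the subject is hearing unmodified speech, which is the training goal the claims describe.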
CA002281644A 1997-12-17 1998-12-14 Method and apparatus for training of sensory and perceptual systems in lli subjects Abandoned CA2281644A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US08/982,189 US5927988A (en) 1997-12-17 1997-12-17 Method and apparatus for training of sensory and perceptual systems in LLI subjects
US08/982,189 1997-12-17
PCT/US1998/026528 WO1999031640A1 (en) 1997-12-17 1998-12-14 Method and apparatus for training of sensory and perceptual systems in lli subjects

Publications (1)

Publication Number Publication Date
CA2281644A1 true CA2281644A1 (en) 1999-06-24

Family

ID=25528917

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002281644A Abandoned CA2281644A1 (en) 1997-12-17 1998-12-14 Method and apparatus for training of sensory and perceptual systems in lli subjects

Country Status (6)

Country Link
US (11) US5927988A (en)
EP (1) EP0963583B1 (en)
AU (1) AU753429B2 (en)
CA (1) CA2281644A1 (en)
DE (1) DE69815507T2 (en)
WO (1) WO1999031640A1 (en)

Families Citing this family (209)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6330426B2 (en) * 1994-05-23 2001-12-11 Stephen J. Brown System and method for remote education using a memory card
US20060287069A1 (en) * 1996-04-22 2006-12-21 Walker Jay S Method and system for adapting casino games to playing preferences
US7033276B2 (en) * 1996-04-22 2006-04-25 Walker Digital, Llc Method and system for adapting casino games to playing preferences
US7192352B2 (en) * 1996-04-22 2007-03-20 Walker Digital, Llc System and method for facilitating play of a video game via a web site
US20050032211A1 (en) * 1996-09-26 2005-02-10 Metabogal Ltd. Cell/tissue culturing device, system and method
US6109107A (en) * 1997-05-07 2000-08-29 Scientific Learning Corporation Method and apparatus for diagnosing and remediating language-based learning impairments
US5927988A (en) * 1997-12-17 1999-07-27 Jenkins; William M. Method and apparatus for training of sensory and perceptual systems in LLI subjects
US6159014A (en) * 1997-12-17 2000-12-12 Scientific Learning Corp. Method and apparatus for training of cognitive and memory systems in humans
US6120298A (en) * 1998-01-23 2000-09-19 Scientific Learning Corp. Uniform motivation for multiple computer-assisted training systems
US8202094B2 (en) * 1998-02-18 2012-06-19 Radmila Solutions, L.L.C. System and method for training users with audible answers to spoken questions
US6146147A (en) * 1998-03-13 2000-11-14 Cognitive Concepts, Inc. Interactive sound awareness skills improvement system and method
GB2338333B (en) * 1998-06-09 2003-02-26 Aubrey Nunes Computer assisted learning system
US6882824B2 (en) 1998-06-10 2005-04-19 Leapfrog Enterprises, Inc. Interactive teaching toy
US6801751B1 (en) * 1999-11-30 2004-10-05 Leapfrog Enterprises, Inc. Interactive learning appliance
US6321226B1 (en) * 1998-06-30 2001-11-20 Microsoft Corporation Flexible keyboard searching
US6178395B1 (en) * 1998-09-30 2001-01-23 Scientific Learning Corporation Systems and processes for data acquisition of location of a range of response time
US6511324B1 (en) * 1998-10-07 2003-01-28 Cognitive Concepts, Inc. Phonological awareness, phonological processing, and reading skill training system and method
US6289310B1 (en) * 1998-10-07 2001-09-11 Scientific Learning Corp. Apparatus for enhancing phoneme differences according to acoustic processing profile for language learning impaired subject
US6036496A (en) * 1998-10-07 2000-03-14 Scientific Learning Corporation Universal screen for language learning impaired subjects
US6299452B1 (en) * 1999-07-09 2001-10-09 Cognitive Concepts, Inc. Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
EP1210005A4 (en) * 1999-08-13 2009-03-04 Synaptec L L C Method and apparatus of enhancing learning capacity
JP2001060058A (en) * 1999-08-23 2001-03-06 Matsushita Electric Ind Co Ltd Learning supporting device, learning supporting method, and recording medium recorded with its program
US6755657B1 (en) 1999-11-09 2004-06-29 Cognitive Concepts, Inc. Reading and spelling skill diagnosis and training system and method
US9520069B2 (en) 1999-11-30 2016-12-13 Leapfrog Enterprises, Inc. Method and system for providing content for learning appliances over an electronic communication medium
US9640083B1 (en) 2002-02-26 2017-05-02 Leapfrog Enterprises, Inc. Method and system for providing content for learning appliances over an electronic communication medium
US6654748B1 (en) 1999-12-07 2003-11-25 Rwd Technologies, Inc. Dynamic application browser and database for use therewith
US6876758B1 (en) * 1999-12-27 2005-04-05 Neuro Vision, Inc. Methods and systems for improving a user's visual perception over a communications network
US6615197B1 (en) * 2000-03-13 2003-09-02 Songhai Chai Brain programmer for increasing human information processing capacity
AU2001257233A1 (en) * 2000-04-26 2001-11-07 Jrl Enterprises, Inc. An interactive, computer-aided speech education method and apparatus
BR0114992A (en) * 2000-10-20 2004-02-10 Eyehear Learning Inc Language learning device and system
AU2002239696A1 (en) * 2000-11-08 2002-05-27 Cognitive Concepts, Inc. Reading and spelling skill diagnosis and training system and method
US6632094B1 (en) * 2000-11-10 2003-10-14 Readingvillage.Com, Inc. Technique for mentoring pre-readers and early readers
US6823312B2 (en) * 2001-01-18 2004-11-23 International Business Machines Corporation Personalized system for providing improved understandability of received speech
US6732076B2 (en) 2001-01-25 2004-05-04 Harcourt Assessment, Inc. Speech analysis and therapy system and method
US6725198B2 (en) 2001-01-25 2004-04-20 Harcourt Assessment, Inc. Speech analysis system and method
GB2371913A (en) * 2001-02-01 2002-08-07 Univ Oxford Frequency discrimination training, e.g. for phonemes or tones.
WO2002063525A1 (en) * 2001-02-08 2002-08-15 Jong-Hae Kim The method of education and scholastic management for cyber education system utilizing internet
US6730041B2 (en) * 2001-04-18 2004-05-04 Diane Dietrich Learning disabilities diagnostic system
KR100705309B1 (en) * 2001-06-13 2007-04-10 가부시키가이샤 노바 Conversation power testing system
DE60220794T2 (en) * 2001-06-21 2008-03-06 Koninklijke Philips Electronics N.V. METHOD FOR TRAINING A CUSTOMER-ORIENTED APPLICATION DEVICE THROUGH LANGUAGE INPUTS, WITH PROGRESSING OUT BY AN ANIMATED CHARACTER WITH DIFFERENT TIRED STATE, WHICH ARE EACH ASSIGNED TO PROGRESS, AND DEVICE FOR CARRYING OUT THE PROCESS
EP1535392A4 (en) * 2001-07-18 2009-09-16 Wireless Generation Inc System and method for real-time observation assessment
US8956164B2 (en) 2001-08-02 2015-02-17 Interethnic, Llc Method of teaching reading and spelling with a progressive interlocking correlative system
US6984176B2 (en) * 2001-09-05 2006-01-10 Pointstreak.Com Inc. System, methodology, and computer program for gathering hockey and hockey-type game data
US20050196732A1 (en) * 2001-09-26 2005-09-08 Scientific Learning Corporation Method and apparatus for automated training of language learning skills
US20030091965A1 (en) * 2001-11-09 2003-05-15 Kuang-Shin Lin Step-by-step english teaching method and its computer accessible recording medium
US9852649B2 (en) * 2001-12-13 2017-12-26 Mind Research Institute Method and system for teaching vocabulary
US7052277B2 (en) * 2001-12-14 2006-05-30 Kellman A.C.T. Services, Inc. System and method for adaptive learning
US6877989B2 (en) * 2002-02-15 2005-04-12 Psychological Dataccorporation Computer program for generating educational and psychological test items
US20030170596A1 (en) * 2002-03-07 2003-09-11 Blank Marion S. Literacy system
US8210850B2 (en) * 2002-03-07 2012-07-03 Blank Marion S Literacy education system for students with autistic spectrum disorders (ASD)
US8128406B2 (en) * 2002-03-15 2012-03-06 Wake Forest University Predictive assessment of reading
AU2003230946A1 (en) 2002-04-19 2003-11-03 Walker Digital, Llc Method and apparatus for linked play gaming with combined outcomes and shared indicia
US20030223095A1 (en) * 2002-05-28 2003-12-04 Kogod Robert P. Symbol message displays
US20040014021A1 (en) * 2002-07-17 2004-01-22 Iyad Suleiman Apparatus and method for evaluating school readiness
US7309315B2 (en) * 2002-09-06 2007-12-18 Epoch Innovations, Ltd. Apparatus, method and computer program product to facilitate ordinary visual perception via an early perceptual-motor extraction of relational information from a light stimuli array to trigger an overall visual-sensory motor integration in a subject
US8491311B2 (en) * 2002-09-30 2013-07-23 Mind Research Institute System and method for analysis and feedback of student performance
US7752045B2 (en) * 2002-10-07 2010-07-06 Carnegie Mellon University Systems and methods for comparing speech elements
US7645140B2 (en) * 2002-11-05 2010-01-12 University Of Rochester Medical Center Method for assessing navigational capacity
US6808392B1 (en) 2002-11-27 2004-10-26 Donna L. Walton System and method of developing a curriculum for stimulating cognitive processing
JP4181869B2 (en) * 2002-12-19 2008-11-19 裕 力丸 Diagnostic equipment
US7951557B2 (en) 2003-04-27 2011-05-31 Protalix Ltd. Human lysosomal proteins from plant cell culture
US20100196345A1 (en) * 2003-04-27 2010-08-05 Protalix Production of high mannose proteins in plant culture
US20050090372A1 (en) * 2003-06-24 2005-04-28 Mark Burrows Method and system for using a database containing rehabilitation plans indexed across multiple dimensions
WO2005002431A1 (en) * 2003-06-24 2005-01-13 Johnson & Johnson Consumer Companies Inc. Method and system for rehabilitating a medical condition across multiple dimensions
US7533411B2 (en) * 2003-09-23 2009-05-12 Microsoft Corporation Order-based human interactive proofs (HIPs) and automatic difficulty rating of HIPs
US20050153263A1 (en) * 2003-10-03 2005-07-14 Scientific Learning Corporation Method for developing cognitive skills in reading
US8529270B2 (en) * 2003-12-12 2013-09-10 Assessment Technology, Inc. Interactive computer system for instructor-student teaching and assessment of preschool children
US20050142522A1 (en) * 2003-12-31 2005-06-30 Kullok Jose R. System for treating disabilities such as dyslexia by enhancing holistic speech perception
US20050153267A1 (en) * 2004-01-13 2005-07-14 Neuroscience Solutions Corporation Rewards method and apparatus for improved neurological training
US20070111173A1 (en) * 2004-01-13 2007-05-17 Posit Science Corporation Method for modulating listener attention toward synthetic formant transition cues in speech stimuli for training
US20060177805A1 (en) * 2004-01-13 2006-08-10 Posit Science Corporation Method for enhancing memory and cognition in aging adults
US20070065789A1 (en) * 2004-01-13 2007-03-22 Posit Science Corporation Method for enhancing memory and cognition in aging adults
US20060051727A1 (en) * 2004-01-13 2006-03-09 Posit Science Corporation Method for enhancing memory and cognition in aging adults
US20050175972A1 (en) * 2004-01-13 2005-08-11 Neuroscience Solutions Corporation Method for enhancing memory and cognition in aging adults
US20060105307A1 (en) * 2004-01-13 2006-05-18 Posit Science Corporation Method for enhancing memory and cognition in aging adults
US8210851B2 (en) * 2004-01-13 2012-07-03 Posit Science Corporation Method for modulating listener attention toward synthetic formant transition cues in speech stimuli for training
US20060073452A1 (en) * 2004-01-13 2006-04-06 Posit Science Corporation Method for enhancing memory and cognition in aging adults
US20070020595A1 (en) * 2004-01-13 2007-01-25 Posit Science Corporation Method for enhancing memory and cognition in aging adults
US20050191603A1 (en) * 2004-02-26 2005-09-01 Scientific Learning Corporation Method and apparatus for automated training of language learning skills
US20060093997A1 (en) * 2004-06-12 2006-05-04 Neurotone, Inc. Aural rehabilitation system and a method of using the same
WO2005124722A2 (en) * 2004-06-12 2005-12-29 Spl Development, Inc. Aural rehabilitation system and method
WO2005125281A1 (en) * 2004-06-14 2005-12-29 Johnson & Johnson Consumer Companies, Inc. System for and method of optimizing an individual’s hearing aid
EP1767060A4 (en) * 2004-06-14 2009-07-29 Johnson & Johnson Consumer At-home hearing aid training system and method
WO2005125275A2 (en) * 2004-06-14 2005-12-29 Johnson & Johnson Consumer Companies, Inc. System for optimizing hearing within a place of business
EP1767053A4 (en) * 2004-06-14 2009-07-01 Johnson & Johnson Consumer System for and method of increasing convenience to users to drive the purchase process for hearing health that results in purchase of a hearing aid
EP1765153A4 (en) * 2004-06-14 2009-07-22 Johnson & Johnson Consumer A sytem for and method of conveniently and automatically testing the hearing of a person
WO2005125276A1 (en) * 2004-06-14 2005-12-29 Johnson & Johnson Consumer Companies, Inc. At-home hearing aid testing and cleaning system
WO2005125279A1 (en) * 2004-06-14 2005-12-29 Johnson & Johnson Consumer Companies, Inc. Hearing device sound simulation system and method of using the system
EP1769412A4 (en) * 2004-06-14 2010-03-31 Johnson & Johnson Consumer Audiologist equipment interface user database for providing aural rehabilitation of hearing loss across multiple dimensions of hearing
EP1767061A4 (en) * 2004-06-15 2009-11-18 Johnson & Johnson Consumer Low-cost, programmable, time-limited hearing aid apparatus, method of use and system for programming same
US8597101B2 (en) * 2004-06-23 2013-12-03 Igt Video content determinative keno game system and method
US7850518B2 (en) * 2004-06-23 2010-12-14 Walker Digital, Llc Video content determinative Keno game system and method
US20050288096A1 (en) * 2004-06-23 2005-12-29 Walker Digital, Llc Methods and apparatus for facilitating a payout at a gaming device using audio / video content
WO2006007632A1 (en) * 2004-07-16 2006-01-26 Era Centre Pty Ltd A method for diagnostic home testing of hearing impairment, and related developmental problems in infants, toddlers, and children
US9355651B2 (en) 2004-09-16 2016-05-31 Lena Foundation System and method for expressive language, developmental disorder, and emotion assessment
US9240188B2 (en) 2004-09-16 2016-01-19 Lena Foundation System and method for expressive language, developmental disorder, and emotion assessment
US8938390B2 (en) * 2007-01-23 2015-01-20 Lena Foundation System and method for expressive language and developmental disorder assessment
US10223934B2 (en) 2004-09-16 2019-03-05 Lena Foundation Systems and methods for expressive language, developmental disorder, and emotion assessment, and contextual feedback
US8460108B2 (en) * 2005-02-25 2013-06-11 Microsoft Corporation Computerized method and system for generating a gaming experience in a networked environment
US20070015121A1 (en) * 2005-06-02 2007-01-18 University Of Southern California Interactive Foreign Language Teaching
US10699593B1 (en) * 2005-06-08 2020-06-30 Pearson Education, Inc. Performance support integration with E-learning system
AU2006265799A1 (en) * 2005-07-01 2007-01-11 Gary Mcnabb Method, system and apparatus for entraining global regulatory bio-networks to evoke optimized self-organizing autonomous adaptive capacities
US20070017351A1 (en) * 2005-07-20 2007-01-25 Acoustic Learning, Inc. Musical absolute pitch recognition instruction system and method
US20070046678A1 (en) * 2005-09-01 2007-03-01 Peterson Matthew R System and method for training with a virtual apparatus
US7295201B2 (en) * 2005-09-29 2007-11-13 General Electronic Company Method and system for generating automated exploded views
CN1963887A (en) * 2005-11-11 2007-05-16 王薇茜 Self-help language study system comply to speech sense
US20070134633A1 (en) * 2005-12-13 2007-06-14 Posit Science Corporation Assessment in cognitive training exercises
US20070134636A1 (en) * 2005-12-13 2007-06-14 Posit Science Corporation Cognitive training using a maximum likelihood assessment procedure
US20070134635A1 (en) * 2005-12-13 2007-06-14 Posit Science Corporation Cognitive training using formant frequency sweeps
US20070134632A1 (en) * 2005-12-13 2007-06-14 Posit Science Corporation Assessment in cognitive training exercises
US20070134631A1 (en) * 2005-12-13 2007-06-14 Posit Science Corporation Progressions in HiFi assessments
US20070134634A1 (en) * 2005-12-13 2007-06-14 Posit Science Corporation Assessment in cognitive training exercises
US20070141541A1 (en) * 2005-12-13 2007-06-21 Posit Science Corporation Assessment in cognitive training exercises
US8215961B2 (en) * 2005-12-15 2012-07-10 Posit Science Corporation Cognitive training using visual sweeps
US8197258B2 (en) * 2005-12-15 2012-06-12 Posit Science Corporation Cognitive training using face-name associations
US20070166676A1 (en) * 2005-12-15 2007-07-19 Posit Science Corporation Cognitive training using guided eye movements
US20070218439A1 (en) * 2005-12-15 2007-09-20 Posit Science Corporation Cognitive training using visual searches
US20070218440A1 (en) * 2005-12-15 2007-09-20 Posit Science Corporation Cognitive training using multiple object tracking
US20070166675A1 (en) * 2005-12-15 2007-07-19 Posit Science Corporation Cognitive training using visual stimuli
US20070184418A1 (en) * 2006-02-07 2007-08-09 Yi-Ming Tseng Method for computer-assisted learning
US8197324B2 (en) 2006-03-23 2012-06-12 Walker Digital, Llc Content determinative game systems and methods for keno and lottery games
US20080213734A1 (en) * 2006-04-02 2008-09-04 Steve George Guide Method for Decoding Pictographic Signs Present on Ancient Artifacts
US7664717B2 (en) * 2006-06-09 2010-02-16 Scientific Learning Corporation Method and apparatus for building skills in accurate text comprehension and use of comprehension strategies
US20070298384A1 (en) * 2006-06-09 2007-12-27 Scientific Learning Corporation Method and apparatus for building accuracy and fluency in recognizing and constructing sentence structures
US20070298383A1 (en) * 2006-06-09 2007-12-27 Scientific Learning Corporation Method and apparatus for building accuracy and fluency in phonemic analysis, decoding, and spelling skills
US20070298385A1 (en) * 2006-06-09 2007-12-27 Scientific Learning Corporation Method and apparatus for building skills in constructing and organizing multiple-paragraph stories and expository passages
US7933852B2 (en) 2006-06-09 2011-04-26 Scientific Learning Corporation Method and apparatus for developing cognitive skills
US20080003553A1 (en) * 2006-06-14 2008-01-03 Roger Stark Cognitive learning video game
US20080038706A1 (en) * 2006-08-08 2008-02-14 Siemens Medical Solutions Usa, Inc. Automated User Education and Training Cognitive Enhancement System
US8771891B2 (en) * 2006-08-15 2014-07-08 GM Global Technology Operations LLC Diagnostic system for unbalanced motor shafts for high speed compressor
US7773097B2 (en) * 2006-10-05 2010-08-10 Posit Science Corporation Visual emphasis for cognitive training exercises
US7540615B2 (en) * 2006-12-15 2009-06-02 Posit Science Corporation Cognitive training using guided eye movements
US20080161080A1 (en) * 2006-12-29 2008-07-03 Nokia Corporation Systems, methods, devices, and computer program products providing a brain-exercising game
US20080160487A1 (en) * 2006-12-29 2008-07-03 Fairfield Language Technologies Modularized computer-aided language learning method and system
EP2126901B1 (en) * 2007-01-23 2015-07-01 Infoture, Inc. System for analysis of speech
WO2008101245A2 (en) * 2007-02-17 2008-08-21 Bradley University Universal learning system
SI2150608T1 (en) * 2007-05-07 2018-04-30 Protalix Ltd. Large scale disposable bioreactor
US20080288866A1 (en) * 2007-05-17 2008-11-20 Spencer James H Mobile device carrousel systems and methods
US8147322B2 (en) 2007-06-12 2012-04-03 Walker Digital, Llc Multiplayer gaming device and methods
US8622831B2 (en) 2007-06-21 2014-01-07 Microsoft Corporation Responsive cutscenes in video games
WO2009006433A1 (en) * 2007-06-29 2009-01-08 Alelo, Inc. Interactive language pronunciation teaching
US8630577B2 (en) * 2007-08-07 2014-01-14 Assessment Technology Incorporated Item banking system for standards-based assessment
US20130281879A1 (en) * 2007-10-31 2013-10-24 First Principles, Inc. Determination of whether a luciferian can be rehabilitated
US8851895B1 (en) * 2008-01-29 2014-10-07 Elizabeth M. Morrison Method and system for teaching and practicing articulation of targeted phonemes
US20090226865A1 (en) * 2008-03-10 2009-09-10 Anat Thieberger Ben-Haim Infant photo to improve infant-directed speech recordings
US20100068683A1 (en) * 2008-09-16 2010-03-18 Treasure Bay, Inc. Devices and methods for improving reading skills
US9713444B2 (en) * 2008-09-23 2017-07-25 Digital Artefacts, Llc Human-digital media interaction tracking
US20100092930A1 (en) * 2008-10-15 2010-04-15 Martin Fletcher System and method for an interactive storytelling game
US20100092933A1 (en) * 2008-10-15 2010-04-15 William Kuchera System and method for an interactive phoneme video game
US20100105015A1 (en) * 2008-10-23 2010-04-29 Judy Ravin System and method for facilitating the decoding or deciphering of foreign accents
US8494857B2 (en) 2009-01-06 2013-07-23 Regents Of The University Of Minnesota Automatic measurement of speech fluency
JP2012516463A (en) * 2009-01-31 2012-07-19 ドッド、エンダ、パトリック Computer execution method
US8360783B2 (en) * 2009-04-16 2013-01-29 Robert Lombard Aural, neural muscle memory response tool and method
US20110053123A1 (en) * 2009-08-31 2011-03-03 Christopher John Lonsdale Method for teaching language pronunciation and spelling
US20110087130A1 (en) * 2009-10-08 2011-04-14 Systems N' Solutions Ltd. Method and system for diagnosing and treating auditory processing disorders
US20110195390A1 (en) * 2010-01-08 2011-08-11 Rebecca Kopriva Methods and Systems of Communicating Academic Meaning and Evaluating Cognitive Abilities in Instructional and Test Settings
US20110200978A1 (en) * 2010-02-16 2011-08-18 Assessment Technology Incorporated Online instructional dialog books
WO2011106364A2 (en) * 2010-02-23 2011-09-01 Farmacia Electronica, Inc. Method and system for consumer-specific communication based on cultural normalization techniques
US20120094263A1 (en) * 2010-10-19 2012-04-19 The Regents Of The University Of California Video games for training sensory and perceptual skills
US20120122066A1 (en) * 2010-11-15 2012-05-17 Age Of Learning, Inc. Online immersive and interactive educational system
US8727781B2 (en) 2010-11-15 2014-05-20 Age Of Learning, Inc. Online educational system with multiple navigational modes
US9324240B2 (en) 2010-12-08 2016-04-26 Age Of Learning, Inc. Vertically integrated mobile educational system
KR101522837B1 (en) * 2010-12-16 2015-05-26 한국전자통신연구원 Communication method and system for the same
US20120158468A1 (en) * 2010-12-20 2012-06-21 Wheeler Clair F Method Of Providing Access Management In An Electronic Apparatus
US9691289B2 (en) * 2010-12-22 2017-06-27 Brightstar Learning Monotonous game-like task to promote effortless automatic recognition of sight words
US10019995B1 (en) 2011-03-01 2018-07-10 Alice J. Stiebel Methods and systems for language learning based on a series of pitch patterns
US11062615B1 (en) 2011-03-01 2021-07-13 Intelligibility Training LLC Methods and systems for remote language learning in a pandemic-aware world
WO2012177976A2 (en) 2011-06-22 2012-12-27 Massachusetts Eye & Ear Infirmary Auditory stimulus for auditory rehabilitation
CN103827962B (en) * 2011-09-09 2016-12-07 旭化成株式会社 Voice recognition device
US8784108B2 (en) 2011-11-21 2014-07-22 Age Of Learning, Inc. Computer-based language immersion teaching for young learners
US8731454B2 (en) 2011-11-21 2014-05-20 Age Of Learning, Inc. E-learning lesson delivery platform
WO2013078147A1 (en) * 2011-11-21 2013-05-30 Age Of Learning, Inc. Language phoneme practice engine
US9058751B2 (en) * 2011-11-21 2015-06-16 Age Of Learning, Inc. Language phoneme practice engine
US8740620B2 (en) 2011-11-21 2014-06-03 Age Of Learning, Inc. Language teaching system that facilitates mentor involvement
US9576593B2 (en) 2012-03-15 2017-02-21 Regents Of The University Of Minnesota Automated verbal fluency assessment
BR112014027594A2 (en) * 2012-05-09 2017-06-27 Koninklijke Philips Nv device for supporting a person's behavior change, method for supporting a person's behavior change using a computer device and program
US9430776B2 (en) 2012-10-25 2016-08-30 Google Inc. Customized E-books
US9009028B2 (en) 2012-12-14 2015-04-14 Google Inc. Custom dictionaries for E-books
US9601026B1 (en) 2013-03-07 2017-03-21 Posit Science Corporation Neuroplasticity games for depression
USD738889S1 (en) 2013-06-09 2015-09-15 Apple Inc. Display screen or portion thereof with animated graphical user interface
ES2525766B1 (en) * 2013-06-24 2015-10-13 Linguaversal S.L. System and method to improve the perception of the sounds of a foreign language
US20150031003A1 (en) * 2013-07-24 2015-01-29 Aspen Performance Technologies Neuroperformance
US20150031426A1 (en) * 2013-07-25 2015-01-29 Ross Alloway Visual Information Targeting Game
US20150037770A1 (en) * 2013-08-01 2015-02-05 Steven Philp Signal processing system for comparing a human-generated signal to a wildlife call signal
USD747344S1 (en) 2013-08-02 2016-01-12 Apple Inc. Display screen with graphical user interface
US9795892B2 (en) 2013-09-30 2017-10-24 Thoughtfull Toys, Inc. Toy car apparatus
CN104050838B (en) * 2014-07-15 2016-06-08 北京网梯科技发展有限公司 A kind of point-of-reading system, equipment and method that can identify the common printed thing with reading
USD760759S1 (en) 2014-09-01 2016-07-05 Apple Inc. Display screen or portion thereof with graphical user interface
USD760267S1 (en) 2015-06-04 2016-06-28 Apple Inc. Display screen or portion thereof with graphical user interface
USD786919S1 (en) * 2015-08-12 2017-05-16 Samsung Electronics Co., Ltd. Display screen or portion thereof with animated graphical user interface
US20170116886A1 (en) * 2015-10-23 2017-04-27 Regents Of The University Of California Method and system for training with frequency modulated sounds to enhance hearing
USD799545S1 (en) 2016-05-18 2017-10-10 Apple Inc. Display screen or portion thereof with icon
USD814515S1 (en) 2016-06-10 2018-04-03 Apple Inc. Display screen or portion thereof with icon
EP3392884A1 (en) * 2017-04-21 2018-10-24 audEERING GmbH A method for automatic affective state inference and an automated affective state inference system
CN108805253B (en) * 2017-04-28 2021-03-02 普天信息技术有限公司 PM2.5 concentration prediction method
USD818494S1 (en) 2017-06-04 2018-05-22 Apple Inc. Display screen or portion thereof with animated graphical user interface
US11033453B1 (en) 2017-06-28 2021-06-15 Bertec Corporation Neurocognitive training system for improving visual motor responses
US11712162B1 (en) 2017-06-28 2023-08-01 Bertec Corporation System for testing and/or training the vision of a user
US11337606B1 (en) 2017-06-28 2022-05-24 Bertec Corporation System for testing and/or training the vision of a user
US20190012615A1 (en) * 2017-07-10 2019-01-10 Broker Genius, Inc. System and Apparatus for the Display and Selection of Listings and Splits
US11288976B2 (en) 2017-10-05 2022-03-29 Fluent Forever Inc. Language fluency system
US11508479B2 (en) * 2017-10-16 2022-11-22 Optum, Inc. Automated question generation and response tracking
WO2019113477A1 (en) 2017-12-07 2019-06-13 Lena Foundation Systems and methods for automatic determination of infant cry and discrimination of cry from fussiness
USD861014S1 (en) 2018-03-15 2019-09-24 Apple Inc. Electronic device with graphical user interface
WO2019193547A1 (en) 2018-04-05 2019-10-10 Cochlear Limited Advanced hearing prosthesis recipient habilitation and/or rehabilitation
US11756691B2 (en) 2018-08-01 2023-09-12 Martin Reimann Brain health comparison system
US11219293B2 (en) * 2019-02-06 2022-01-11 Soniagenix Exfoliating and nourishing applicator
USD951978S1 (en) 2020-06-21 2022-05-17 Apple Inc. Display screen or portion thereof with graphical user interface
USD1006036S1 (en) 2021-04-06 2023-11-28 Apple Inc. Display or portion thereof with icon

Family Cites Families (97)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4010557A (en) * 1968-04-05 1977-03-08 D. H. Baldwin Company Music laboratory
US3816664A (en) * 1971-09-28 1974-06-11 R Koch Signal compression and expansion apparatus with means for preserving or varying pitch
IT995101B (en) * 1972-07-31 1975-11-10 Beller Isi APPARATUS FOR THE TREATMENT OF SPOKEN AND WRITTEN LANGUAGE DISORDERS
US4128737A (en) * 1976-08-16 1978-12-05 Federal Screw Works Voice synthesizer
US4569026A (en) * 1979-02-05 1986-02-04 Best Robert M TV Movies that talk back
JPS5853787B2 (en) * 1979-08-30 1983-12-01 シャープ株式会社 electronic dictionary
CA1149050A (en) * 1980-02-08 1983-06-28 Alfred A.A.A. Tomatis Apparatus for conditioning hearing
US4464119A (en) * 1981-11-10 1984-08-07 Vildgrube Georgy S Method and device for correcting speech
US4746991A (en) * 1982-01-12 1988-05-24 Discovision Associates Recording characteristic evaluation of a recording medium with a time sequence of test signals
NL8202318A (en) * 1982-06-09 1984-01-02 Koninkl Philips Electronics Nv SYSTEM FOR THE TRANSFER OF VOICE OVER A DISTURBED TRANSMISSION.
US4641343A (en) * 1983-02-22 1987-02-03 Iowa State University Research Foundation, Inc. Real time speech formant analyzer and display
JPS59226400A (en) * 1983-06-07 1984-12-19 松下電器産業株式会社 Voice recognition equipment
US4696042A (en) * 1983-11-03 1987-09-22 Texas Instruments Incorporated Syllable boundary recognition from phonological linguistic unit string data
US4799261A (en) * 1983-11-03 1989-01-17 Texas Instruments Incorporated Low data rate speech encoding employing syllable duration patterns
EP0143623A3 (en) * 1983-11-25 1987-09-23 Mars Incorporated Automatic test equipment
FR2568437B1 (en) * 1984-07-27 1988-10-14 Isi Beller AUDIO FREQUENCY CONVERTER APPARATUS, INSTALLATION FOR THE TREATMENT OF SUBJECTS WITH AUDIO-PHONATORY AND AUDITIVO-VERBAL DISORDERS INCLUDING SUCH APPARATUS AND METHOD USING SUCH AN INSTALLATION
US4821325A (en) * 1984-11-08 1989-04-11 American Telephone And Telegraph Company, At&T Bell Laboratories Endpoint detector
US4586905A (en) * 1985-03-15 1986-05-06 Groff James W Computer-assisted audio/visual teaching system
CA1243779A (en) * 1985-03-20 1988-10-25 Tetsu Taguchi Speech processing system
US4689553A (en) * 1985-04-12 1987-08-25 Jodon Engineering Associates, Inc. Method and system for monitoring position of a fluid actuator employing microwave resonant cavity principles
US4820059A (en) * 1985-10-30 1989-04-11 Central Institute For The Deaf Speech processing apparatus and methods
JPS63501603A (en) * 1985-10-30 1988-06-16 セントラル インステイチユ−ト フオ ザ デフ Speech processing device and method
US5697844A (en) * 1986-03-10 1997-12-16 Response Reward Systems, L.C. System and method for playing games and rewarding successful players
US4852168A (en) * 1986-11-18 1989-07-25 Sprague Richard P Compression of stored waveforms for artificial speech
US4876737A (en) * 1986-11-26 1989-10-24 Microdyne Corporation Satellite data transmission and receiving station
US4884972A (en) * 1986-11-26 1989-12-05 Bright Star Technology, Inc. Speech synchronized animation
US4852170A (en) * 1986-12-18 1989-07-25 R & D Associates Real time computer speech recognition system
JP2558682B2 (en) * 1987-03-13 1996-11-27 株式会社東芝 Intellectual work station
GB8720387D0 (en) * 1987-08-28 1987-10-07 British Telecomm Matching vectors
US4980917A (en) * 1987-11-18 1990-12-25 Emerson & Stern Associates, Inc. Method and apparatus for determining articulatory parameters from speech data
JP2791036B2 (en) * 1988-04-23 1998-08-27 キヤノン株式会社 Audio processing device
US5010495A (en) * 1989-02-02 1991-04-23 American Language Academy Interactive language learning system
NL8901985A (en) * 1989-08-01 1991-03-01 Nl Stichting Voor Het Dove En METHOD AND APPARATUS FOR SCREENING THE HEARING OF A YOUNG CHILD
DE3931638A1 (en) * 1989-09-22 1991-04-04 Standard Elektrik Lorenz Ag METHOD FOR SPEAKER ADAPTIVE RECOGNITION OF LANGUAGE
DE69024919T2 (en) * 1989-10-06 1996-10-17 Matsushita Electric Ind Co Ltd Setup and method for changing speech speed
US5169342A (en) * 1990-05-30 1992-12-08 Steele Richard D Method of communicating with a language deficient patient
JP2609752B2 (en) * 1990-10-09 1997-05-14 三菱電機株式会社 Voice / in-band data identification device
US5261820A (en) * 1990-12-21 1993-11-16 Dynamix, Inc. Computer simulation playback method and simulation
US5215468A (en) * 1991-03-11 1993-06-01 Lauffer Martha A Method and apparatus for introducing subliminal changes to audio stimuli
US5751927A (en) * 1991-03-26 1998-05-12 Wason; Thomas D. Method and apparatus for producing three dimensional displays on a two dimensional surface
US5303327A (en) * 1991-07-02 1994-04-12 Duke University Communication test system
US5305420A (en) * 1991-09-25 1994-04-19 Nippon Hoso Kyokai Method and apparatus for hearing assistance with speech speed control function
US5214708A (en) * 1991-12-16 1993-05-25 Mceachern Robert H Speech information extractor
US5231568A (en) * 1992-01-16 1993-07-27 Impact Telemedia, Inc. Promotional game method and apparatus therefor
FR2686442B1 (en) * 1992-01-21 1994-04-29 Beller Isi Improved audio frequency converter apparatus, installation for the treatment of subjects including such apparatus, and method using such an installation
US5285498A (en) * 1992-03-02 1994-02-08 At&T Bell Laboratories Method and apparatus for coding audio signals based on perceptual model
US5289521A (en) * 1992-03-09 1994-02-22 Coleman Michael J Audio/telecommunications system to assist in speech and cognitive skills development for the verbally handicapped
US5692906A (en) * 1992-04-01 1997-12-02 Corder; Paul R. Method of diagnosing and remediating a deficiency in communications skills
US5302132A (en) * 1992-04-01 1994-04-12 Corder Paul R Instructional system and method for improving communication skills
US5393236A (en) * 1992-09-25 1995-02-28 Northeastern University Interactive speech pronunciation apparatus and method
GB9223066D0 (en) * 1992-11-04 1992-12-16 Secr Defence Children's speech training aid
US5428707A (en) * 1992-11-13 1995-06-27 Dragon Systems, Inc. Apparatus and methods for training speech recognition systems and their users and otherwise improving speech recognition performance
US5353011A (en) * 1993-01-04 1994-10-04 Checkpoint Systems, Inc. Electronic article security system with digital signal processing and increased detection range
US5487671A (en) * 1993-01-21 1996-01-30 Dsp Solutions (International) Computerized system for teaching speech
US5562453A (en) * 1993-02-02 1996-10-08 Wen; Sheree H.-R. Adaptive biofeedback speech tutor toy
US5336093A (en) * 1993-02-22 1994-08-09 Cox Carla H Reading instructions method for disabled readers
DE69420955T2 (en) * 1993-03-26 2000-07-13 British Telecomm Converting text into signal forms
US5860064A (en) * 1993-05-13 1999-01-12 Apple Computer, Inc. Method and apparatus for automatic generation of vocal emotion in a synthetic text-to-speech system
US5421731A (en) * 1993-05-26 1995-06-06 Walker; Susan M. Method for teaching reading and spelling
US5340316A (en) * 1993-05-28 1994-08-23 Panasonic Technologies, Inc. Synthesis-based speech training system
US5536171A (en) * 1993-05-28 1996-07-16 Panasonic Technologies, Inc. Synthesis-based speech training system and method
US5741136A (en) * 1993-09-24 1998-04-21 Readspeak, Inc. Audio-visual work with a series of visual word symbols coordinated with oral word utterances
US5517595A (en) * 1994-02-08 1996-05-14 At&T Corp. Decomposition in noise and periodic signal waveforms in waveform interpolation
US5429513A (en) * 1994-02-10 1995-07-04 Diaz-Plaza; Ruth R. Interactive teaching apparatus and method for teaching graphemes, grapheme names, phonemes, and phonetics
IL108908A (en) * 1994-03-09 1996-10-31 Speech Therapy Systems Ltd Speech therapy system
US5540589A (en) * 1994-04-11 1996-07-30 Mitsubishi Electric Information Technology Center Audio interactive tutor
US5799267A (en) * 1994-07-22 1998-08-25 Siegel; Steven H. Phonic engine
US5640490A (en) * 1994-11-14 1997-06-17 Fonix Corporation User independent, real-time speech recognition system and method
US5697789A (en) * 1994-11-22 1997-12-16 Softrade International, Inc. Method and system for aiding foreign language instruction
JPH10511472A (en) 1994-12-08 1998-11-04 The Regents of the University of California Method and apparatus for improving speech recognition among speech-impaired persons
US5717828A (en) * 1995-03-15 1998-02-10 Syracuse Language Systems Speech recognition apparatus and method for learning
EP0769184B1 (en) * 1995-05-03 2000-04-26 Koninklijke Philips Electronics N.V. Speech recognition methods and apparatus on the basis of the modelling of new words
AU1128597A (en) * 1995-12-04 1997-06-27 Jared C. Bernstein Method and apparatus for combined information from speech signals for adaptive interaction in teaching and testing
IL120622A (en) * 1996-04-09 2000-02-17 Raytheon Co System and method for multimodal interactive speech and language training
US5766015A (en) * 1996-07-11 1998-06-16 Digispeech (Israel) Ltd. Apparatus for interactive language training
US5855513A (en) * 1996-08-26 1999-01-05 Tiger Electronics, Ltd. Electronic matching and position game
US5690493A (en) * 1996-11-12 1997-11-25 Mcalear, Jr.; Anthony M. Thought form method of reading for the reading impaired
US6109107A (en) * 1997-05-07 2000-08-29 Scientific Learning Corporation Method and apparatus for diagnosing and remediating language-based learning impairments
US5868683A (en) * 1997-10-24 1999-02-09 Scientific Learning Corporation Techniques for predicting reading deficit based on acoustical measurements
US6159014A (en) * 1997-12-17 2000-12-12 Scientific Learning Corp. Method and apparatus for training of cognitive and memory systems in humans
US5927988A (en) * 1997-12-17 1999-07-27 Jenkins; William M. Method and apparatus for training of sensory and perceptual systems in LLI subjects
US6019607A (en) * 1997-12-17 2000-02-01 Jenkins; William M. Method and apparatus for training of sensory and perceptual systems in LLI systems
US6261101B1 (en) * 1997-12-17 2001-07-17 Scientific Learning Corp. Method and apparatus for cognitive training of humans using adaptive timing of exercises
US6052512A (en) * 1997-12-22 2000-04-18 Scientific Learning Corp. Migration mechanism for user data from one client computer system to another
US5957699A (en) * 1997-12-22 1999-09-28 Scientific Learning Corporation Remote computer-assisted professionally supervised teaching system
US6120298A (en) * 1998-01-23 2000-09-19 Scientific Learning Corp. Uniform motivation for multiple computer-assisted training systems
US6067638A (en) * 1998-04-22 2000-05-23 Scientific Learning Corp. Simulated play of interactive multimedia applications for error detection
US6113645A (en) * 1998-04-22 2000-09-05 Scientific Learning Corp. Simulated play of interactive multimedia applications for error detection
US6076060A (en) * 1998-05-01 2000-06-13 Compaq Computer Corporation Computer method and apparatus for translating text to sound
US6078885A (en) * 1998-05-08 2000-06-20 At&T Corp Verbal, fully automatic dictionary updates by end-users of speech synthesis and recognition systems
US6099318A (en) * 1998-05-21 2000-08-08 Mcleod; Deandra Educational card game
US6036496A (en) * 1998-10-07 2000-03-14 Scientific Learning Corporation Universal screen for language learning impaired subjects
US6026361A (en) * 1998-12-03 2000-02-15 Lucent Technologies, Inc. Speech intelligibility testing system
US6234802B1 (en) * 1999-01-26 2001-05-22 Microsoft Corporation Virtual challenge system and method for teaching a language
JP3066528B1 (en) * 1999-02-26 2000-07-17 コナミ株式会社 Music playback system, rhythm analysis method and recording medium
US6151571A (en) * 1999-08-31 2000-11-21 Andersen Consulting System, method and article of manufacture for detecting emotion in voice signals through analysis of a plurality of voice signal parameters
US6347996B1 (en) * 2000-09-12 2002-02-19 Wms Gaming Inc. Gaming machine with concealed image bonus feature

Also Published As

Publication number Publication date
US5927988A (en) 1999-07-27
US6328569B1 (en) 2001-12-11
DE69815507T2 (en) 2004-04-22
US6334776B1 (en) 2002-01-01
US6599129B2 (en) 2003-07-29
US6358056B1 (en) 2002-03-19
US6364666B1 (en) 2002-04-02
AU1823999A (en) 1999-07-05
DE69815507D1 (en) 2003-07-17
US6334777B1 (en) 2002-01-01
EP0963583B1 (en) 2003-06-11
US6210166B1 (en) 2001-04-03
WO1999031640A1 (en) 1999-06-24
AU753429B2 (en) 2002-10-17
US20020034717A1 (en) 2002-03-21
US6190173B1 (en) 2001-02-20
US6331115B1 (en) 2001-12-18
US6224384B1 (en) 2001-05-01
EP0963583A1 (en) 1999-12-15

Similar Documents

Publication Publication Date Title
US5927988A (en) Method and apparatus for training of sensory and perceptual systems in LLI subjects
US6019607A (en) Method and apparatus for training of sensory and perceptual systems in LLI systems
US6629844B1 (en) Method and apparatus for training of cognitive and memory systems in humans
US6261101B1 (en) Method and apparatus for cognitive training of humans using adaptive timing of exercises
US6290504B1 (en) Method and apparatus for reporting progress of a subject using audio/visual adaptive training stimulii
US8083523B2 (en) Method for developing cognitive skills using spelling and word building on a computing device
US6146147A (en) Interactive sound awareness skills improvement system and method
US20060141425A1 (en) Method for developing cognitive skills in reading
US20050153267A1 (en) Rewards method and apparatus for improved neurological training
US20070065789A1 (en) Method for enhancing memory and cognition in aging adults
US20060177805A1 (en) Method for enhancing memory and cognition in aging adults
Nittrouer et al. Perceptual weighting strategies of children with cochlear implants and normal hearing
US5930757A (en) Interactive two-way conversational apparatus with voice recognition
Masterson et al. Use of technology in phonological intervention
US20070020595A1 (en) Method for enhancing memory and cognition in aging adults
Driessen Processing an Unfamiliar Regional Accent of English by Dutch Second Language Learners of English
Carletti Developing phonemic and phonological awareness in Italian EFL learners: a proposal of original and engaging teaching materials for primary school children.
Robbins et al. Sound approach: Using phonemic awareness to teach reading and spelling
Silver Responding to Sound through Toys, the Environment, and Speech
Ball Right Eye vs. Left Eye
Mense et al. ready set
Cechová FACULTY OF EDUCATION

Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued