WO2000046787A2 - System and method for automating transcription services - Google Patents
- Publication number
- WO2000046787A2 (PCT/US2000/002808)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- file
- training
- current
- text
- written text
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/166—Editing, e.g. inserting or deleting
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/221—Announcement of recognition results
Definitions
- the present invention relates in general to computer speech recognition systems and, in particular, to a system and method for automating the text transcription of voice dictation by various end users.
- Speech recognition programs are well known in the art. While these programs are ultimately useful in automatically converting speech into text, many users are dissuaded from using these programs because they require each user to spend a significant amount of time training the system. Usually this training begins by having each user read a series of pre-selected materials for approximately 20 minutes. Then, as the user continues to use the program and words are improperly transcribed, the user is expected to stop and train the program as to the intended word, thus advancing the ultimate accuracy of the acoustic model. Unfortunately, most professionals (doctors, dentists, veterinarians, lawyers) and business executives are unwilling to spend the time developing the necessary acoustic model to truly benefit from the automated transcription.
- the present invention comprises, in part, a system for substantially automating transcription services for one or more voice users.
- the system includes means for creating a uniquely identified voice dictation file from a current user and an audio player used to audibly reproduce said uniquely identified voice dictation file. Both of these system elements can be implemented on the same or different general-purpose computers.
- the voice dictation file creating means includes a system for assigning unique file handles to audio files and an audio recorder, and further comprise means for operably connecting to a separate digital recording device and/or means for reading audio files from removable magnetic and other computer media.
- Each of the general purpose computers implementing the system may be remotely located from the other computers but in operable connection to each other by way of a computer network, direct telephone connection, via email or other Internet based transfer.
- the system further includes means for manually inputting and creating a transcribed file based on humanly perceived contents of the uniquely identified voice dictation file.
- a human transcriptionist manually transcribes a textual version of the audio (using a text editor or word processor) based on the output of the audio player.
- the system also includes means for automatically converting the voice dictation file into written text.
- the automatic speech converting means may be a preexisting speech recognition program, such as Dragon Systems' Naturally Speaking, IBM's Via Voice or Philips Corporation's Magic Speech. In such a case, the automatic speech converting means includes means for automating responses to a series of interactive inquiries from the preexisting speech recognition program.
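The automated-response idea above can be sketched as a lookup from prompt text to a canned reply. This is an illustrative sketch only: the prompt strings and responses are invented, and a real implementation would drive the vendor program's actual dialogs rather than plain strings.

```python
# Hypothetical prompt-to-response table for automating a speech
# recognition program's interactive inquiries. All entries are
# invented for illustration.
RESPONSES = {
    "Select user": "usern",
    "Select vocabulary": "general",
    "Begin training?": "yes",
}

def respond(prompt: str) -> str:
    """Return the canned response for a known prompt prefix,
    or flag the prompt for manual handling."""
    for key, answer in RESPONSES.items():
        if prompt.startswith(key):
            return answer
    return "ESCALATE"  # unknown prompt: fall back to a human operator
```

In practice the "prompts" would be observed dialog windows and the "responses" would be synthesized keystrokes, as the description later explains.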
- the system also includes means for manually selecting a specialized language model.
- the system further includes means for manually editing the resulting written text to create a verbatim text of the voice dictation file.
- at the outset, this verbatim text will have to be created completely manually.
- once the automatic speech converting means has begun to sufficiently develop that user's acoustic model, a more automated means can be used.
- that manual editing means includes means for sequentially comparing a copy of the written text with the transcribed file resulting in a sequential list of unmatched words culled from the copy of said written text.
- the manual editing means further includes means for incrementally searching for the current unmatched word contemporaneously within a first buffer associated with the speech recognition program containing the written text and a second buffer associated with the sequential list.
- the preferred manual editing means includes means for correcting the current unmatched word in the second buffer, which includes means for displaying the current unmatched word in a manner substantially visually isolated from other text in the written text and means for playing a portion of the voice dictation recording from said first buffer associated with said current unmatched word.
- the manual input means further includes means for alternatively viewing the current unmatched word in context within the written text. For instance, the operator may wish to view the unmatched word within the sentence in which it appears, or perhaps with only its immediately adjacent words.
- the manner of substantial visual isolation can be manually selected from the group containing word-by-word display, sentence-by-sentence display, and current unmatched word display.
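The three display modes just described might be sketched as follows. The function name and the sentence-boundary heuristic (punctuation at the end of a token) are illustrative assumptions, not the patent's implementation.

```python
def show_context(words, i, mode):
    """Display word i in word-by-word, adjacent-words, or
    sentence-by-sentence mode (a sketch of the selectable
    visual-isolation modes)."""
    if mode == "word":
        return words[i]
    if mode == "adjacent":
        lo, hi = max(i - 1, 0), min(i + 2, len(words))
        return " ".join(words[lo:hi])
    if mode == "sentence":
        # walk outward to the nearest sentence-ending punctuation
        start = i
        while start > 0 and not words[start - 1].endswith((".", "?", "!")):
            start -= 1
        end = i
        while end < len(words) - 1 and not words[end].endswith((".", "?", "!")):
            end += 1
        return " ".join(words[start:end + 1])
    raise ValueError(mode)
```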
- the manual editing means portion of the complete system may also be utilized as a separate apparatus.
- the system may also include means for determining the skill of a human transcriptionist. In one approach, this accuracy determination can be made by determining the ratio of the number of words in the sequential list of unmatched words to the number of words in the written text.
- the system additionally includes means for training the automatic speech converting means to achieve higher accuracy for the current user.
- the training means uses the verbatim text created by the manual editing means and the voice dictation file.
- the training means may also comprise a preexisting training portion of the preexisting speech recognition program.
- the training means would also include means for automating responses to a series of interactive inquiries from the preexisting training portion of the speech recognition program. This functionality can be used, for instance, to establish a new language model (e.g. a foreign language).
- the system finally includes means for controlling the flow of the voice dictation file based upon the training status of the current user using the unique identification.
- the control means reads and modifies a user's training status such that it is an appropriate selection from the group of pre-enrollment, enrollment, training, automation and stop automation.
- the control means further includes means for creating a user identification and acoustic model within the automatic speech converting means.
- the control means routes the voice dictation file to the automatic speech converting means and the manual input means, routes the written text and the transcribed file to the manual editing means, routes the verbatim text to the training means and routes the transcribed file back to the current user as a finished text.
- control means routes (1) the voice dictation file only to the automatic speech converting means and (2) the written text back to the current user as a finished text.
- the present application also discloses a method for automating transcription services for one or more voice users in a system including a manual transcription station and a speech recognition program.
- the method comprising the steps of: (1) establishing a profile for each of the voice users, the profile containing a training status; (2) creating a uniquely identified voice dictation file from a current voice user; (3) choosing the training status of the current voice user from the group of enrollment, training, automated and stop automation; (4) routing the voice dictation file to at least one of the manual transcription station and the speech recognition program based on the training status; (5) receiving the voice dictation file in at least one of the manual transcription station and the speech recognition program; (6) creating a transcribed file at the manual transcription station for each received voice dictation file; (7) automatically creating a written text with the speech recognition program for each received voice dictation file if the training status of the current user is training or automated; (8) manually establishing a verbatim file if the training status of the current user is enrollment or training; and (9) training the speech recognition program with the verbatim file and the voice dictation file.
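A minimal sketch of the routing decision in steps (3) and (4), using the status names recited above. Which destinations each status maps to is inferred from the surrounding description and should be treated as illustrative rather than a claim construction.

```python
def route(training_status: str) -> set:
    """Return the destination(s) for a voice dictation file
    given the current user's training status (a sketch of the
    routing rule, not the claimed control means)."""
    routes = {
        "enrollment": {"manual"},              # human transcript + verbatim file
        "training": {"manual", "recognizer"},  # both outputs, compared for training
        "automated": {"recognizer"},           # recognizer output is the finished text
        "stop automation": {"manual"},         # fall back to human transcription
    }
    return routes[training_status]
```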
- Fig. 1 of the drawings is a block diagram of one potential embodiment of the present system for substantially automating transcription services for one or more voice users;
- Fig. 1b of the drawings is a block diagram of a general-purpose computer which may be used as a dictation station, a transcription station and the control means within the present system;
- Fig. 2a of the drawings is a flow diagram of the main loop of the control means of the present system
- Fig. 2b of the drawings is a flow diagram of the enrollment stage portion of the control means of the present system
- Fig. 2c of the drawings is a flow diagram of the training stage portion of the control means of the present system
- Fig. 2d of the drawings is a flow diagram of the automation stage portion of the control means of the present system
- Fig. 3 of the drawings is a directory structure used by the control means in the present system
- Fig. 4 of the drawings is a block diagram of a portion of a preferred embodiment of the manual editing means.
- Fig. 5 of the drawings is an elevation view of the remainder of a preferred embodiment of the manual editing means.
- Fig. 1 of the drawings generally shows one potential embodiment of the present system for substantially automating transcription services for one or more voice users.
- the present system must include some means for receiving a voice dictation file from a current user.
- This voice dictation file receiving means can be a digital audio recorder, an analog audio recorder, or standard means for receiving computer files on magnetic media or via a data connection.
- the system 100 includes multiple digital recording stations 10, 11, 12 and 13.
- Each digital recording station has at least a digital audio recorder and means for identifying the current voice user.
- each of these digital recording stations is implemented on a general-purpose computer (such as computer 20), although a specialized computer could be developed for this specific purpose.
- the general-purpose computer though has the added advantage of being adaptable to varying uses in addition to operating within the present system 100.
- the general-purpose computer should have, among other elements, a microprocessor (such as the Intel Corporation PENTIUM, Cyrix K6 or Motorola 68000 series); volatile and non-volatile memory; one or more mass storage devices (e.g. a hard disk drive (not shown), floppy drive 21, and other removable media devices 22 such as a CD-ROM drive, or a DITTO, ZIP or JAZ drive (from Iomega Corporation) and the like); various user input devices, such as a mouse 23, a keyboard 24, or a microphone 25; and a video display system 26.
- the general-purpose computer is controlled by the WINDOWS 9.x operating system. It is contemplated, however, that the present system would work equally well using a MACINTOSH computer or even another operating system such as a WINDOWS CE, UNIX or a JAVA based operating system, to name a few.
- in an embodiment utilizing an analog audio input (via microphone 25), the general-purpose computer must include a sound card (not shown). Of course, in an embodiment with a digital input, no sound card would be necessary.
- digital audio recording stations 10, 11, 12 and 13 are loaded and configured to run digital audio recording software on a PENTIUM-based computer system operating under WINDOWS 9.x. Such digital recording software is available as a utility in the WINDOWS 9.x operating system or from various third-party vendors such as The Programmers' Consortium, Inc. of Oakton, Virginia (VOICEDOC), Syntrillium Corporation of Phoenix, Arizona (COOL EDIT) or Dragon Systems Corporation (Dragon Naturally Speaking Professional Edition). These various software programs produce a voice dictation file in the form of a "WAV" file.
- other audio file formats such as MP3 or DSS, could also be used to format the voice dictation file, without departing from the spirit of the present invention.
- in the case of the VOICEDOC software, that software also automatically assigns a file handle to the "WAV" file.
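Unique file-handle assignment might look like the following sketch. The naming scheme (user identifier plus a random hex job tag) is an assumption for illustration, not VOICEDOC's actual convention.

```python
import uuid

def new_voice_file_handle(user_id: str, directory: str = "current") -> str:
    """Assign a unique handle to a new dictation WAV file.
    The user-id-plus-random-tag naming is hypothetical; the
    patent's examples use numeric job handles like '6723'."""
    job_tag = uuid.uuid4().hex[:8]
    return f"{directory}/{user_id}-{job_tag}.wav"
```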
- the voice dictation file may alternatively be created on a dedicated digital recorder 14, such as the Olympus Digital Voice Recorder D-1000 manufactured by the Olympus Corporation.
- In order to harvest the digital audio file, upon completion of a recording, dedicated digital recorder 14 would be operably connected to one of the digital audio recording stations, such as 13, toward downloading the digital audio file into that general-purpose computer. With this approach, for instance, no audio card would be required.
- Another alternative for receiving the voice dictation file may consist of using one form or another of removable magnetic media containing a pre-recorded audio file. With this alternative an operator would input the removable magnetic media into one of the digital audio recording stations toward uploading the audio file into the system.
- a DSS file format may have to be changed to a WAV file format, or the sampling rate of a digital audio file may have to be upsampled or downsampled.
- to use the Olympus Digital Voice Recorder with Dragon Naturally Speaking, for example, Olympus' 8 kHz sampling rate needs to be upsampled to 11 kHz.
- Software to accomplish such pre-processing is available from a variety of sources including Syntrillium Corporation and Olympus Corporation.
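The sample-rate conversion just described can be illustrated with a simple linear-interpolation resampler. Production tools use proper low-pass filtering; this is only a sketch of the idea of stretching an 8 kHz sample stream to a higher rate.

```python
def resample(samples, src_rate, dst_rate):
    """Resample a PCM sample sequence by linear interpolation,
    e.g. 8000 Hz -> 11025 Hz. Illustrative only: real upsampling
    software also applies anti-imaging filters."""
    if src_rate == dst_rate:
        return list(samples)
    out_len = int(len(samples) * dst_rate / src_rate)
    out = []
    for j in range(out_len):
        pos = j * src_rate / dst_rate      # fractional source index
        i = int(pos)
        frac = pos - i
        nxt = samples[min(i + 1, len(samples) - 1)]
        out.append(samples[i] * (1 - frac) + nxt * frac)
    return out
```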
- the other aspect of the digital audio recording stations is some means for identifying the current voice user.
- the identifying means may include keyboard 24 upon which the user (or a separate operator) can input the current user's unique identification code.
- the user identification can be input using a myriad of computer input devices such as pointing devices (e.g. mouse 23), a touch screen (not shown), a light pen (not shown), bar-code reader (not shown) or audio cues via microphone 25, to name a few.
- the identifying means may also assign that user an identification number after receiving potentially identifying information from that user, including: (1) name; (2) address; (3) occupation; (4) vocal dialect or accent; etc.
- a voice user profile and a sub-directory within the control means are established.
- a user identification must be established for each voice user and subsequently provided with a corresponding digital audio file for each use such that the control means can appropriately route and the system ultimately transcribe the audio.
- the identifying means may also seek the manual selection of a specialty vocabulary. It is contemplated that the specialty vocabulary sets may be general for various classes of users, such as medical users.
- the digital audio recording stations may be operably connected to system 100 as part of computer network 30 or, alternatively, they may be operably connected to the system via internet host 15.
- the general-purpose computer can be connected to both network jack 27 and telephone jack.
- connection may be accomplished by e-mailing the audio file via the Internet.
- Another method for completing such connection is by way of direct modem connection via remote control software, such as PC ANYWHERE, which is available from Symantec Corporation of Cupertino, California.
- yet another alternative, where the IP address of digital audio recording station 10 or internet host 15 is known, is to transfer the audio file using basic file transfer protocol.
- Control means 200 controls the flow of voice dictation file based upon the training status of the current voice user.
- control means 200 comprises a software program operating on general purpose computer 40.
- the program is initialized in step 201, where variables are set, buffers are cleared, and the particular configuration for this particular installation of the control means is loaded.
- Control means continually monitors a target directory (such as "current" (shown in Fig. 3)) to determine whether a new file has been moved into the target, step 202. Once a new file is found (such as "6723.id" (shown in Fig. 3)), a determination is made as to whether or not the current user 5 (shown in Fig. 1) is a new user, step 203.
- For each new user (as indicated by the existence of a ".pro" file in the "current" subdirectory), a new subdirectory is established, step 204 (such as the "usern" subdirectory (shown in Fig. 3)).
- This subdirectory is used to store all of the audio files ("xxxx.wav"), written text ("xxxx.wrt"), verbatim text ("xxxx.vb"), transcription text ("xxxx.txt") and user profile ("usern.pro") for that particular user.
- Each particular job is assigned a unique number "xxxx" such that all of the files associated with a job can be associated by that number. With this directory structure, the number of users is practically limited only by storage space within general-purpose computer 40.
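The job-number grouping can be sketched as a small helper that derives every per-job file name from one number. The directory layout and extensions follow Fig. 3 as described above; the function itself is illustrative.

```python
def job_files(user: str, job: int) -> dict:
    """Derive all file names for one job from its job number,
    per the Fig. 3 directory layout described above."""
    base = f"{user}/{job:04d}"
    return {
        "audio": base + ".wav",        # dictated audio
        "written": base + ".wrt",      # speech recognizer output
        "verbatim": base + ".vb",      # corrected verbatim text
        "transcript": base + ".txt",   # human transcription
    }
```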
- the user profile is moved to the subdirectory, step 205.
- the contents of this user profile may vary between systems.
- the contents of one potential user profile are shown in Fig. 3 as containing: the user name, address, occupation and training status. Aside from the training status variable, which is necessary, the other data is useful in routing and transcribing the audio files.
- the control means, having selected one set of files by the handle, determines the identity of the current user by comparing the ".id" file with its "user.tbl," step 206. Now that the user is known, the user profile may be parsed from that user's subdirectory and the current training status determined, step 207. Steps 208-211 are the triage of the current training status, which is one of: enrollment, training, automate, and stop automation.
- Enrollment is the first stage in automating transcription services.
- the audio file is sent to transcription, step 301.
- the "xxxx.wav" file is transferred to transcriptionist stations 50 and 51.
- both stations are general-purpose computers, which run both an audio player and manual input means.
- the audio player is likely to be a digital audio player, although it is possible that an analog audio file could be transferred to the stations.
- Various audio players are commonly available, including a utility in the WINDOWS 9.x operating system and various other third-party offerings such as that from The Programmers' Consortium, Inc. of Oakton, Virginia.
- manual input means is running on the computer at the same time.
- This manual input means may comprise any text editor or word processor (such as MS WORD, WordPerfect, AmiPro or Word Pad) in combination with a keyboard, mouse, or other user-interface device.
- this manual input means may, itself, also be speech recognition software, such as Naturally Speaking from Dragon Systems of Newton, Massachusetts, Via Voice from IBM Corporation of Armonk, New York, or Speech Magic from Philips Corporation of Atlanta, Georgia.
- Human transcriptionist 6 listens to the audio file created by current user 5 and as is known, manually inputs the perceived contents of that recorded text, thus establishing the transcribed file, step 302.
- Being human, human transcriptionist 6 is likely to impose experience, education and biases on the text and thus not input a verbatim transcript of the audio file. Upon completion of the human transcription, human transcriptionist 6 saves the file and indicates that it is ready for transfer to the current user's subdirectory as "xxxx.txt", step 303.
- control means 200 starts the automatic speech conversion means, step 306.
- This automatic speech conversion means may be a preexisting program, such as Dragon System's Naturally Speaking, IBM's Via Voice or Philips' Speech Magic, to name a few. Alternatively, it could be a unique program that is designed to specifically perform automated speech recognition.
- Dragon Systems' Naturally Speaking has been used by running an executable simultaneously with Naturally Speaking that feeds phantom keystrokes and mousing operations through the WIN32 API, such that Naturally Speaking believes that it is interacting with a human being, when in fact it is being controlled by control means 200.
- Such techniques are well known in the computer software testing art and, thus, will not be discussed in detail. It should suffice to say that by watching the application flow of any speech recognition program, an executable to mimic the interactive manual steps can be created.
- Control means provides the necessary information from the user profile found in the current user's subdirectory. All speech recognition programs require significant training to establish an acoustic model of a particular user. In the case of Dragon, the program initially seeks approximately 20 minutes of audio, usually obtained by the user reading a canned text provided by Dragon Systems. There is also functionality built into Dragon that allows "mobile training." Using this feature, the verbatim file and audio file are fed into the speech recognition program to begin training the acoustic model for that user, step 308. Regardless of the length of that audio file, control means 200 closes the speech recognition program at the completion of the file, step 309.
- a copy of the transcribed file is sent to the current user using the address information contained in the user profile, step 310.
- This address can be a street address or an e-mail address. Following that transmission, the program returns to the main loop on Fig. 2a.
- steps 401-403 are the same human transcription steps as steps 301-303 in the enrollment phase.
- control means 200 starts the automatic speech conversion means (or speech recognition program) and selects the current user, step 404.
- the audio file is fed into the speech recognition program and a written text is established within the program buffer, step 405.
- this buffer is given the same file handle on every instance of the program. Thus, that buffer can be easily copied using standard operating system commands and manual editing can begin, step 406.
- the user inputs audio into the VOICEWARE system's VOICEDOC program, thus creating a ".wav" file.
- the user selects a "transcriptionist."
- This "transcriptionist" may be a particular human transcriptionist or may be the "computerized transcriptionist." If the user selects a "computerized transcriptionist" they may also select whether that transcription is handled locally or remotely.
- This file is assigned a job number by the VOICEWARE server, which routes the job to the VOICESCRIBE portion of the system.
- VOICESCRIBE is used by the human transcriptionist to receive and playback the job's audio (".wav") file.
- the audio file is grabbed by the automatic speech conversion means.
- new jobs (i.e. an audio file newly created by VOICEDOC) open a VOICESCRIBE window having a window title formed by the job number of the current ".wav" file.
- An executable file running in the background "sees" the VOICESCRIBE window open and, using the WIN32 API, determines the job number from the VOICESCRIBE window title.
- the executable file then launches the automatic speech conversion means.
- In Dragon Systems' Naturally Speaking, for instance, there is a built-in function for performing speech recognition on a preexisting ".wav" file.
- the executable program feeds phantom keystrokes to Naturally Speaking to open the ".wav” file from the "current" directory (see Fig. 3) having the job number of the current job.
- the executable file resumes operation by selecting all of the text in the open Naturally Speaking window and copying it to the WINDOWS 9.x operating system clipboard. Then, using the clipboard utility, the clipboard is saved as a text file using the current job number with a "dmt" suffix. The executable file then "clicks" the "complete" button in VOICESCRIBE to return the "dmt" file to the VOICEWARE server.
- the foregoing procedure can be done utilizing other digital recording software and other automatic speech conversion means. Additionally, functionality analogous to the WINDOWS clipboard exists in other operating systems.
- Upon completion of the dictation, the user presses a button labeled "return" (generated by a background executable file), which executable then commences a macro that gets the current job number from VOICEWARE (in the manner described above), selects all of the text in the document and copies it to the clipboard. The clipboard is then saved to the file "<jobnumber>.dmt," as discussed above. The executable then "clicks" the "complete" button (via the WIN32 API) in VOICESCRIBE, which effectively returns the automatically transcribed text file back to the VOICEWARE server, which, in turn, returns the completed transcription to the VOICESCRIBE user.
- the present invention also includes means for improving on that task.
- the transcribed file ("3333.txt") and the copy of the written text ("3333.wrt") are sequentially compared word by word 406a toward establishing a sequential list of unmatched words 406b that are culled from the copy of the written text.
- This list has a beginning and an end and pointer 406c to the current unmatched word.
- Underlying the sequential list is another list of objects which contains the original unmatched words, as well as the words immediately before and after that unmatched word, the starting location in memory of each unmatched word in the sequential list of unmatched words 406b and the length of the unmatched word.
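The sequential comparison and its underlying object list can be sketched with Python's difflib. The dictionary fields mirror the described objects (the unmatched word, its immediate neighbours, and its location in the written text); the real system's data layout is not specified, so this is illustrative only.

```python
import difflib

def unmatched_words(transcribed: str, written: str):
    """Compare the transcribed file against the written text word by
    word and return the unmatched words culled from the written text,
    each with its neighbouring words and index (a sketch of the
    sequential list 406b and its underlying object list)."""
    t, w = transcribed.split(), written.split()
    out = []
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, t, w).get_opcodes():
        if op in ("replace", "insert"):  # words present only in the written text
            for j in range(j1, j2):
                out.append({
                    "word": w[j],
                    "before": w[j - 1] if j > 0 else "",
                    "after": w[j + 1] if j + 1 < len(w) else "",
                    "index": j,
                })
    return out
```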
- the unmatched word pointed at by pointer 406c from list 406b is displayed in substantial visual isolation from the other text in the copy of the written text on a standard computer monitor 500 in an active window 501.
- the context of the unmatched word can be selected by the operator to be shown within the sentence it resides, word by word or in phrase context, by clicking on buttons 514, 515, and 516, respectively.
- background window 502 which contains the copy of the written text file.
- an incremental search has located (see pointer 503) the next occurrence of the current unmatched word "cash." Contemporaneously therewith, within window 505 containing the buffer from the speech recognition program, the same incremental search has located (see pointer 506) the next occurrence of the current unmatched word.
- A human user, likely viewing only active window 501, can activate the audio replay from the speech recognition program by clicking on "play" button 510, which plays the audio synchronized to the text at pointer 506. Based on that snippet of speech, which can be played over and over by clicking on the play button, the human user can manually input the correction to the current unmatched word via keyboard, mousing actions, or possibly even audible cues to another speech recognition program running within this window.
- Even with the choice of isolated context offered by buttons 514, 515 and 516, it may still be difficult to determine the correct verbatim word out of context; accordingly, there is a switch window button 513 that will move background window 502 to the foreground with visible pointer 503 indicating the current location within the copy of the written text. The user can then return to the active window and input the correct word, "trash." This change will only affect the copy of the written text displayed in background window 502.
- the operator clicks on the advance button 511, which advances pointer 406c down the list of unmatched words and activates the incremental search in both window 502 and 505.
- This unmatched word is now displayed in isolation and the operator can play the synchronized speech from the speech recognition program and correct this word as well.
- This list is traversed in object by object fashion, but alternatively each of the records could be padded such that each item has the same word size to assist in bi-directional traversing of the list.
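The fixed-width padding alternative mentioned above can be sketched as follows; with equal-sized records, stepping backward or forward is simple index arithmetic. The record width of 32 characters is an arbitrary assumption.

```python
RECORD_WIDTH = 32  # assumed fixed record size, in characters

def pack_records(words):
    """Pad each unmatched word to a fixed width so the list can be
    traversed in both directions by index arithmetic (one design
    option the text mentions; an object list is the other)."""
    return "".join(w.ljust(RECORD_WIDTH)[:RECORD_WIDTH] for w in words)

def record_at(buf, n):
    """Fetch the n-th record by offset, in either traversal direction."""
    return buf[n * RECORD_WIDTH:(n + 1) * RECORD_WIDTH].rstrip()
```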
- Because the unmatched words in this underlying list are read-only, it is possible to return to the original unmatched word so that the operator can determine if a different correction should have been made.
- the copy of the written text is finally corrected resulting in a verbatim copy, which is saved to the user's subdirectory.
- the verbatim file is also passed to the speech recognition program for training, step 407.
- the new (and improved) acoustic model is saved, step 408, and the speech recognition program is closed, step 409.
- the transcribed file is returned to the user, as in step 310 from the enrollment phase.
- the system may also include means for determining the accuracy rate from the output of the sequential comparing means. Specifically, by counting the number of words in the written text and the number of words in list 406b, the ratio of words in the sequential list to words in the written text can be determined, thus providing an accuracy percentage. As before, it is a matter of choice when to advance users from one stage to another. Once that accuracy goal is reached, the user's profile is changed to the next stage, step 211.
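A sketch of the accuracy computation, assuming (as the text implies) that the accuracy percentage is the complement of the unmatched-to-total word ratio:

```python
def accuracy_percentage(written_word_count: int, unmatched_count: int) -> float:
    """Accuracy from the sequential-comparison output: the share of
    written-text words that matched the transcription. The complement
    interpretation of the ratio is an assumption for illustration."""
    if written_word_count == 0:
        return 0.0
    return 100.0 * (1 - unmatched_count / written_word_count)
```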
- One potential enhancement or derivative functionality is provided by the determination of the accuracy percentage.
- this percentage could be used to evaluate a human transcriptionist's skills.
- the associated ".wav" file would be played for the human transcriptionist and the foregoing comparison would be performed on the transcribed text versus the verbatim file created by the foregoing process. In this manner, additional functionality can be provided by the present system.
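Evaluating a transcriptionist in this way amounts to comparing their transcript against the verbatim file word by word. A minimal sketch using Python's standard `difflib` (a stand-in for the patent's own comparison means, not a detail from the patent):

```python
import difflib


def transcriptionist_score(transcribed_text, verbatim_text):
    """Percentage of the transcript that matches the verbatim file, word-wise."""
    matcher = difflib.SequenceMatcher(
        a=verbatim_text.split(), b=transcribed_text.split()
    )
    return matcher.ratio() * 100.0


verbatim = "please take out the trash before noon"
typed = "please take out the cash before noon"
score = transcriptionist_score(typed, verbatim)  # one word wrong out of seven
```

A perfect transcript scores 100; each substituted, dropped, or inserted word lowers the ratio.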
- The speech recognition software is started, step 600, and the current user is selected, step 601. If desired, a particularized vocabulary may be selected, step 602. Automatic conversion of the digital audio file recorded by the current user may then commence, step 603. When conversion is complete, the written file is transmitted to the user based on the information contained in the user profile, step 604, and the program returns to the main loop.
- The system administrator may also set the training status variable to a stop-automation state, in which steps 301, 302, 303, 305 and 310 (see Fig. 2b) are the only steps performed.
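The overall control flow, in which a per-user training status variable selects between automated conversion, assisted transcription with training, and manual-only processing, might be sketched as follows. All names are hypothetical; only the step numbers come from the figures.

```python
from enum import Enum


class TrainingStatus(Enum):
    ENROLLMENT = "enrollment"
    TRAINING = "training"
    AUTOMATED = "automated"
    STOP_AUTOMATION = "stop_automation"


def process_audio_file(profile, audio_file):
    """Dispatch an incoming audio file based on the user's training status."""
    status = profile["training_status"]
    if status is TrainingStatus.AUTOMATED:
        # Steps 600-604: convert automatically and transmit to the user.
        return "automatic_conversion"
    if status is TrainingStatus.STOP_AUTOMATION:
        # Only the manual transcription steps (301-303, 305, 310) are performed.
        return "manual_transcription_only"
    # Enrollment and training stages both involve human correction
    # and acoustic-model training before the file is returned.
    return "assisted_transcription_and_training"


profile = {"training_status": TrainingStatus.STOP_AUTOMATION}
result = process_audio_file(profile, "dictation.wav")  # manual_transcription_only
```

The administrator's stop-automation setting thus simply short-circuits the training and conversion branches.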
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/889,870 US7006967B1 (en) | 1999-02-05 | 2000-02-04 | System and method for automating transcription services |
CA002362462A CA2362462A1 (en) | 1999-02-05 | 2000-02-04 | System and method for automating transcription services |
AU35882/00A AU3588200A (en) | 1999-02-05 | 2000-02-04 | System and method for automating transcription services |
GB0118231A GB2361569B (en) | 1999-02-05 | 2000-02-04 | System and method for automating transcription services |
US10/014,677 US20020095290A1 (en) | 1999-02-05 | 2001-12-11 | Speech recognition program mapping tool to align an audio file to verbatim text |
HK02101880.9A HK1041086A1 (en) | 1999-02-05 | 2002-03-12 | System and method for automating transcription services |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11894999P | 1999-02-05 | 1999-02-05 | |
US60/118,949 | 1999-02-05 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2000046787A2 true WO2000046787A2 (en) | 2000-08-10 |
WO2000046787A3 WO2000046787A3 (en) | 2000-12-14 |
Family
ID=22381731
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2000/002808 WO2000046787A2 (en) | 1999-02-05 | 2000-02-04 | System and method for automating transcription services |
Country Status (5)
Country | Link |
---|---|
AU (1) | AU3588200A (en) |
CA (1) | CA2362462A1 (en) |
GB (1) | GB2361569B (en) |
HK (1) | HK1041086A1 (en) |
WO (1) | WO2000046787A2 (en) |
- 2000
  - 2000-02-04 CA CA002362462A patent/CA2362462A1/en not_active Abandoned
  - 2000-02-04 WO PCT/US2000/002808 patent/WO2000046787A2/en active Application Filing
  - 2000-02-04 AU AU35882/00A patent/AU3588200A/en not_active Abandoned
  - 2000-02-04 GB GB0118231A patent/GB2361569B/en not_active Expired - Fee Related
- 2002
  - 2002-03-12 HK HK02101880.9A patent/HK1041086A1/en unknown
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5799273A (en) * | 1996-09-24 | 1998-08-25 | Allvoice Computing Plc | Automated proofreading using interface linking recognized words to their audio data while text is being changed |
US5875448A (en) * | 1996-10-08 | 1999-02-23 | Boys; Donald R. | Data stream editing system including a hand-held voice-editing apparatus having a position-finding enunciator |
Non-Patent Citations (1)
Title |
---|
"Dragon Dictate for Windows 2.0", User's Guide. British version, First edition, Dragon Systems, Inc., Newton, Massachusetts, pp. 1-230, XP002929983. * |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7383187B2 (en) | 2001-01-24 | 2008-06-03 | Bevocal, Inc. | System, method and computer program product for a distributed speech recognition tuning platform |
US7174296B2 (en) | 2001-03-16 | 2007-02-06 | Koninklijke Philips Electronics N.V. | Transcription service stopping automatic transcription |
DE10126020A1 (en) * | 2001-05-28 | 2003-01-09 | Olaf Berberich | Automatic conversion of words spoken by speaker into digitally coded terms for processing by computer involves displaying term rejections in correction window for direct entry correction |
US8972840B2 (en) | 2001-11-03 | 2015-03-03 | Longsand Limited | Time ordered indexing of an information stream |
GB2381638A (en) * | 2001-11-03 | 2003-05-07 | Dremedia Ltd | Identifying audio characteristics |
GB2381638B (en) * | 2001-11-03 | 2004-02-04 | Dremedia Ltd | Identifying audio characteristics |
US7206303B2 (en) | 2001-11-03 | 2007-04-17 | Autonomy Systems Limited | Time ordered indexing of an information stream |
US7292979B2 (en) | 2001-11-03 | 2007-11-06 | Autonomy Systems, Limited | Time ordered indexing of audio data |
WO2008041083A2 (en) * | 2006-10-02 | 2008-04-10 | Bighand Ltd. | Digital dictation workflow system and method |
WO2008041083A3 (en) * | 2006-10-02 | 2008-08-28 | Bighand Ltd | Digital dictation workflow system and method |
US8024289B2 (en) | 2007-07-31 | 2011-09-20 | Bighand Ltd. | System and method for efficiently providing content over a thin client network |
US20200152200A1 (en) * | 2017-07-19 | 2020-05-14 | Alibaba Group Holding Limited | Information processing method, system, electronic device, and computer storage medium |
US11664030B2 (en) * | 2017-07-19 | 2023-05-30 | Alibaba Group Holding Limited | Information processing method, system, electronic device, and computer storage medium |
CN116074150A (en) * | 2023-03-02 | 2023-05-05 | 广东浩博特科技股份有限公司 | Switch control method and device for intelligent home and intelligent home |
CN116074150B (en) * | 2023-03-02 | 2023-06-09 | 广东浩博特科技股份有限公司 | Switch control method and device for intelligent home and intelligent home |
Also Published As
Publication number | Publication date |
---|---|
GB2361569B (en) | 2003-12-24 |
HK1041086A1 (en) | 2002-06-28 |
GB2361569A (en) | 2001-10-24 |
AU3588200A (en) | 2000-08-25 |
GB0118231D0 (en) | 2001-09-19 |
WO2000046787A3 (en) | 2000-12-14 |
CA2362462A1 (en) | 2000-08-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1183680B1 (en) | Automated transcription system and method using two speech converting instances and computer-assisted correction | |
US6961699B1 (en) | Automated transcription system and method using two speech converting instances and computer-assisted correction | |
US7006967B1 (en) | System and method for automating transcription services | |
US6122614A (en) | System and method for automating transcription services | |
US5875448A (en) | Data stream editing system including a hand-held voice-editing apparatus having a position-finding enunciator | |
US7516070B2 (en) | Method for simultaneously creating audio-aligned final and verbatim text with the assistance of a speech recognition program as may be useful in form completion using a verbal entry method | |
US7979281B2 (en) | Methods and systems for creating a second generation session file | |
US6490558B1 (en) | System and method for improving the accuracy of a speech recognition program through repetitive training | |
US20020095290A1 (en) | Speech recognition program mapping tool to align an audio file to verbatim text | |
US20050102146A1 (en) | Method and apparatus for voice dictation and document production | |
US6915258B2 (en) | Method and apparatus for displaying and manipulating account information using the human voice | |
US20100169092A1 (en) | Voice interface ocx | |
WO2000046787A2 (en) | System and method for automating transcription services | |
US20110113357A1 (en) | Manipulating results of a media archive search | |
AU2004233462B2 (en) | Automated transcription system and method using two speech converting instances and computer-assisted correction | |
GB2390930A (en) | Foreign language speech recognition | |
WO2001009877A9 (en) | System and method for improving the accuracy of a speech recognition program | |
WO2001093058A1 (en) | System and method for comparing text generated in association with a speech recognition program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
AK | Designated states |
Kind code of ref document: A3 Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A3 Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG |
|
WWE | Wipo information: entry into national phase |
Ref document number: 09889870 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 200118231 Country of ref document: GB Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 35882/00 Country of ref document: AU |
|
WWE | Wipo information: entry into national phase |
Ref document number: IN/PCT/2001/780/KOL Country of ref document: IN |
|
ENP | Entry into the national phase |
Ref document number: 2362462 Country of ref document: CA Kind code of ref document: A |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
122 | Ep: pct application non-entry in european phase |