Search Results for: STT performance

Results 1 - 10 of 46 | Page 1 of 5

SPE3 – Releases and Changelogs

Relevance: 100%      Posted on: 2021-05-03

Speech Engine (SPE) is developed as a RESTful API on top of Phonexia BSAPI. SPE was formerly known as BSAPI-rest (up to v2.x) or as Phonexia Server (up to v3.2.x). Releases and changelogs:

Speech Engine 3.40.2, DB v1700, BSAPI 3.40.2 (2021-04-30), public release:
- Fixed: LMC does not work with the CS_CZ_6 online (stream) configuration
- Fixed: Incorrect sample rate in Opus files
- Fixed: Various "[ERRFMT]" log messages

Speech Engine 3.40.1, DB v1700, BSAPI 3.40.1 (2021-04-16), public release:
- Fixed: 6th-generation STT/KWS stream result may start with words from the end of the previous stream
- Fixed: Some licensing error messages are not shown in the log…

Performance of the Speaker Identification 4th generation (SID4): Intel® Xeon® Platinum 8124M

Relevance: 62%      Posted on: 2019-10-30

Benchmark goals:
- Find realistic performance using total recording length
- Find FTRT based exactly on net_speech (engineering sizing data)
- Find system performance using all physical cores
- Find system performance using all logical cores

Infrastructure setup: Intel® Xeon® Platinum 8124M in a virtual machine with 8 physical cores reserved exclusively for this VM, Hyper-Threading enabled (16 logical cores available), 32 GB RAM, 30 GB SSD-based storage, 1000 I/O·s⁻¹ reserved per core.

Benchmark data set statistics:
- Number of files: 32 (300 seconds each)
- Total raw recording length: 9600 sec
- Total net speech length: 4224.77 sec

In the data set…
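The FTRT (faster-than-real-time) figure the benchmark refers to is the ratio of audio duration to processing time. A minimal sketch of that arithmetic, using the data-set lengths above; the processing time is a made-up example value, not a measured result:

```python
# FTRT = audio length processed / wall-clock processing time.
raw_audio_s = 9600.0        # total raw recording length from the data set
net_speech_s = 4224.77      # total net speech length from the data set
processing_time_s = 1200.0  # hypothetical wall-clock processing time

ftrt_raw = raw_audio_s / processing_time_s   # realistic FTRT (total length)
ftrt_net = net_speech_s / processing_time_s  # sizing FTRT (net speech only)
print(round(ftrt_raw, 2), round(ftrt_net, 2))  # 8.0 3.52
```

The same run thus yields two FTRT numbers, which is why the benchmark distinguishes the "realistic" figure from the net_speech-based sizing figure.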

STT Language Model Customization tutorial

Relevance: 47%      Posted on: 2019-04-24

The Language Model Customization tool (LMC) provides a way to improve Speech To Text performance by creating a customized language model. The language model is an important part of Phonexia Speech To Text. In simplified terms, it can be imagined as a large dictionary with multiple statistics. The Speech To Text technology uses this dictionary and statistical model to convert audio signals into the proper text equivalents. Due to the general diversity of spoken speech, the default generic language model may not acknowledge the importance of certain words over others in certain situations. Language model customization is a way to inform…

What are STT preferred phrases and how to use them

Relevance: 27%      Posted on: 2020-11-26

Speech Engine version 3.32 and later includes a new STT feature called Preferred phrases. This article explains what the feature is good for, how it works internally, and gives some tips for practical implementation. What are preferred phrases: In speech transcription tasks, there may be situations where similar-sounding words get confused, e.g. "WiFi" vs. "HiFi", "route" vs. "root", "cell" vs. "sell", etc. Normally, the language model part of the Speech To Text does its job here and, in the context of a longer phrase or an entire sentence, prefers the correct word:  ×  I'm going to cell my car. Hmmm, such…

Arabic dialects in Phonexia LID and STT

Relevance: 25%      Posted on: 2021-01-18

The Arabic language has (a) one standardised variety and (b) many non-standard varieties (dialects). In this article, our linguistic team explains the differences between Modern Standard Arabic and Arabic dialects in the context of Phonexia Arabic models. Standard variety: Modern Standard Arabic (MSA). All Arabs learn it at school (not from their parents, so we cannot say it is their native variety). It is the lingua franca (common language) of the Arabic world, like English for Europeans; however, Arabs speak it much better since they are schooled in MSA from an early age. MSA is more similar to some dialects (e.g. Levantine), but…

How to convert STT confusion network results to one-best

Relevance: 21%      Posted on: 2020-04-06

Confusion Network output is the most detailed Speech Engine STT output, as it provides multiple word alternatives for individual timeslots of the processed speech signal. Therefore, many applications want to use it as the main source of speech transcription and perform the eventual conversion to less verbose output formats internally. This article provides the recommended way to do the conversion. Time slots and word alternatives: The recommended algorithm for converting a Confusion Network (CN) to one-best is as follows: loop through all CN timeslots from start to end; in each timeslot, get the alternative with the highest score, and if it's not <null/> or…
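The loop described in the excerpt can be sketched as follows. This is a hypothetical illustration: the timeslot/alternative field names are assumptions for the sketch, not the exact Speech Engine JSON schema, and only the "<null/>" skip rule from the excerpt is applied:

```python
def confusion_network_to_one_best(timeslots):
    """Pick the highest-scoring non-null word from each CN timeslot."""
    words = []
    for slot in timeslots:  # loop through all CN timeslots from start to end
        # take the alternative with the highest score in this timeslot
        best = max(slot["alternatives"], key=lambda a: a["score"])
        if best["word"] != "<null/>":  # skip "no word here" alternatives
            words.append(best["word"])
    return " ".join(words)

cn = [
    {"alternatives": [{"word": "I'm", "score": 0.9}, {"word": "am", "score": 0.1}]},
    {"alternatives": [{"word": "<null/>", "score": 0.6}, {"word": "a", "score": 0.4}]},
    {"alternatives": [{"word": "going", "score": 0.95}, {"word": "growing", "score": 0.05}]},
]
print(confusion_network_to_one_best(cn))  # I'm going
```

Note how the middle timeslot is dropped entirely because its best alternative is "<null/>", even though a real word ("a") is among the alternatives.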

What is STT words-to-numbers feature and how to use it

Relevance: 21%      Posted on: 2021-04-22

Speech Engine 3.30 and later includes a new STT feature for native numbers and dates in n‍-best output. This article explains the details of the feature and gives some tips for fine-tuning the results. NOTE: The feature is currently implemented for Czech and Slovak only! If you would like to help add support for other languages (available in 5th or newer generation), please contact your Phonexia sales representative, or support@phonexia.com. What is the words-to-numbers feature: The words-to-numbers feature allows converting a raw transcription of numbers, dates (or similar patterns like credit card numbers) to their native form: two thousand twenty one ⇒…
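To make the idea concrete, here is a toy English illustration of the words-to-numbers concept. This is not the engine's implementation (the actual feature is Czech/Slovak only and far more complete); it only handles small cardinal phrases:

```python
# Toy converter: cardinal number words -> digits, illustrative only.
UNITS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
         "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10,
         "twenty": 20, "thirty": 30, "forty": 40, "fifty": 50}

def words_to_number(text):
    total, current = 0, 0
    for word in text.lower().split():
        if word in UNITS:
            current += UNITS[word]
        elif word == "hundred":
            current = (current or 1) * 100
        elif word == "thousand":
            total += (current or 1) * 1000
            current = 0
    return total + current

print(words_to_number("two thousand twenty one"))  # 2021
```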

How to configure STT realtime stream word detection parameters

Relevance: 19%      Posted on: 2020-03-28

One of the improvements implemented since Speech Engine 3.24 is a neural-network-based VAD, used for word and segment detection. This article describes the segmenter configuration parameters and how they affect the realtime stream STT results. The default segmenter parameters are shown below:

[vad.online_segmenter:SOnlineVoiceActivitySegmenterI]
backward_extensions_length_ms=150
forward_extensions_length_ms=750
speech_threshold=0.5

The backward and forward extensions are intervals in milliseconds which extend the part of the signal going to the decoder. The decoder is the component which determines what a particular part of the signal contains (speech, silence, etc.). Based on that, the decoder also decides whether a segment has finished or not. Unlike in file processing…
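The effect of the two extension parameters can be sketched as below. This is an illustration of the concept described above, not Speech Engine internals: a detected speech segment is widened by the configured margins before being handed to the decoder:

```python
# Default values from the segmenter configuration above.
BACKWARD_MS = 150  # backward_extensions_length_ms
FORWARD_MS = 750   # forward_extensions_length_ms

def extend_segment(start_ms, end_ms, stream_start_ms=0):
    """Widen a VAD speech segment by the configured margins,
    clamped so it cannot start before the stream itself."""
    return (max(stream_start_ms, start_ms - BACKWARD_MS),
            end_ms + FORWARD_MS)

# A segment detected at 1.0-2.5 s is sent to the decoder as 0.85-3.25 s.
print(extend_segment(1000, 2500))  # (850, 3250)
```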

Difference between on-the-fly and off-line type of transcription (STT)

Relevance: 18%      Posted on: 2017-12-11

Similarly to a human, the ASR (STT) engine adapts to the acoustic channel, environment and speaker. The ASR (STT) engine also learns more information about the content over time, which is used to improve recognition. The dictate engine, also known as on-the-fly transcription, does not look into the future and has information about just a few seconds of speech at the beginning of a recording. As the output is requested immediately while the audio is being processed, the engine can't predict what will come in the next seconds of speech. When access to the whole recording is granted during off-line transcription…