Search Results for: stt


SPE3 – Releases and Changelogs

     Posted on: 2020-07-30

Speech Engine (SPE) is developed as a RESTful API on top of Phonexia BSAPI. SPE was formerly known as BSAPI-rest (up to v2.x) or as Phonexia Server (up to v3.2.x). This page lists changes in SPE releases. Releases Changelogs
Speech Engine 3.30.10 (07/29/2020) - DB v1401, BSAPI 3.30.10 - Public release - New: Updated STT model RU_RU_A to version 4.4.0
Speech Engine 3.31.1 (07/02/2020) - DB v1500, BSAPI 3.31.0 - Non-public Feature Preview release - Fixed: SQLite database update from version v1401 fails
Speech Engine 3.31.0 (07/01/2020) - DB v1500, BSAPI 3.31.0 - Non-public Feature Preview release - New: SPE now requires CentOS 7 or other Linux…

Browser3 – Releases and Changelogs

     Posted on: 2020-07-24

Phonexia Browser v3 (Browser3) is developed as a client on top of Phonexia Speech Engine v3. Phonexia Browser is the successor of Phonexia Speech Intelligence Resolver v1 (SIR1). This page lists changes in Browser releases. Releases Changelogs
Phonexia Browser v3.31.2, BSAPI 3.31.0 - Jul 24 2020 - Non-public Feature Preview release - Fixed: STT result version mismatch
Phonexia Browser v3.31.1, BSAPI 3.31.0 - Jul 08 2020 - Non-public Feature Preview release - New: Browser now requires CentOS 7 or other Linux based OS with glibc >= 2.17; Version 3.31.0 was skipped
Phonexia Browser v3.30.8, BSAPI 3.30.8 - Jun 29 2020 - Public release - Fixed: SID Evaluator…

Language Identification (LID)

     Posted on: 2020-07-09

Phonexia Language Identification (LID) will help you distinguish the spoken language or dialect. It will enable your system to automatically route valuable calls to your experts in the given language or to send them to other software for analysis. Phonexia uses state-of-the-art language identification (LID) technology based on iVectors, introduced during the 2010 NIST (National Institute of Standards and Technology, USA) evaluations. The technology is independent of text and channel. This highly accurate technology uses the power of voice biometrics to automatically recognize the spoken language. Application areas: Preselecting multilingual sources and routing audio streams/files to language dependent…

How to convert STT confusion network results to one-best

     Posted on: 2020-04-06

Confusion Network output is the most detailed Speech Engine STT output, as it provides multiple word alternatives for individual timeslots of the processed speech signal. Therefore many applications want to use it as the main source of speech transcription and convert it to less verbose output formats internally. This article provides the recommended way to do the conversion. Time slots and word alternatives: The recommended algorithm for converting Confusion Network (CN) to One-best is as follows: loop through all CN timeslots from start to end; in each timeslot, get the input alternative with the highest score and, if it's not <null/> or…
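
For illustration, below is a minimal Python sketch of the conversion loop described above. The result structure (timeslots with scored alternatives) is an assumption made for the example, and since the excerpt is truncated, which further non-word tokens besides <null/> should be skipped is also an assumption; only "loop over timeslots, pick the highest-scoring alternative, drop <null/>" comes from the article.

# Minimal sketch of converting an STT Confusion Network (CN) result to one-best.
# Assumed (hypothetical) shape: a list of timeslots, each a dict with "start",
# "end" and "alternatives", where every alternative has "word" and "score".
SKIP_TOKENS = {"<null/>"}  # extend with other non-word tokens if needed (assumption)

def confusion_network_to_one_best(timeslots):
    """Return a list of (start, end, word) tuples for the one-best path."""
    one_best = []
    for slot in timeslots:  # loop through all CN timeslots from start to end
        if not slot.get("alternatives"):
            continue
        # pick the alternative with the highest score in this timeslot
        best = max(slot["alternatives"], key=lambda alt: alt["score"])
        if best["word"] in SKIP_TOKENS:
            continue  # drop non-word placeholders
        one_best.append((slot["start"], slot["end"], best["word"]))
    return one_best

# Example with a hypothetical CN (the "eighty machines" example from this site):
cn = [
    {"start": 0.00, "end": 0.40,
     "alternatives": [{"word": "eighty", "score": 0.62},
                      {"word": "eight", "score": 0.38}]},
    {"start": 0.40, "end": 0.55,
     "alternatives": [{"word": "<null/>", "score": 0.70},
                      {"word": "tea", "score": 0.30}]},
    {"start": 0.55, "end": 1.10,
     "alternatives": [{"word": "machines", "score": 0.97}]},
]
print(" ".join(word for _, _, word in confusion_network_to_one_best(cn)))
# -> "eighty machines"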

What is a user configuration file and how to use it

     Posted on: 2020-03-28

Advanced users with appropriate knowledge (gained e.g. by taking the Phonexia Academy Advanced Training) may want to fine-tune the behavior of the technologies to adapt to the nature of their audio data. Modifying original BSAPI configuration files directly can be dangerous – inappropriate changes may cause unpredictable behavior, and without a backup of the unmodified file it's difficult to restore a working state. User configuration files provide a way to override processing parameters without modifying the original BSAPI configuration files. WARNING: Inappropriate configuration changes may cause serious issues! Make sure you really know what you are doing. A user configuration file is a…
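
As a minimal sketch of the idea, the Python snippet below writes a hypothetical user configuration file that overrides a single parameter (the speech_threshold key of the online segmenter section quoted in the realtime-stream article below) while leaving the original BSAPI configuration untouched. The output file name, the override value, and how Speech Engine is pointed at the file are assumptions; the excerpt is cut off before describing them.

# Hypothetical sketch: build a user configuration file that overrides one
# parameter instead of editing the original BSAPI configuration file.
# The section and key names come from the segmenter article below; the
# output file name and the override value are assumptions for illustration.
import configparser

override = configparser.ConfigParser()
override["vad.online_segmenter:SOnlineVoiceActivitySegmenterI"] = {
    "speech_threshold": "0.6",  # example value; the documented default is 0.5
}

with open("stt_user_config.ini", "w") as f:  # hypothetical file name
    override.write(f)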

How to configure STT realtime stream word detection parameters

     Posted on: 2020-03-28

One of the improvements implemented since Speech Engine 3.24 is a neural-network based VAD, used for word and segment detection. This article describes the segmenter configuration parameters and how they affect the realtime stream STT results. The default segmenter parameters are shown below:
[vad.online_segmenter:SOnlineVoiceActivitySegmenterI]
backward_extensions_length_ms=150
forward_extensions_length_ms=750
speech_threshold=0.5
Backward and forward extensions are intervals in milliseconds which extend the part of the signal going to the decoder. The decoder is the component that determines what a particular part of the signal contains (speech, silence, etc.). Based on that, the decoder also decides whether a segment has finished or not. Unlike in file processing…
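
To make the meaning of the two extension parameters concrete, here is a small Python sketch of how a detected speech region could be widened before being handed to the decoder. It only illustrates the parameter semantics described above; it is not a description of Speech Engine internals.

# Illustration only: widen a VAD-detected speech region by the backward and
# forward extensions (in milliseconds) before it goes to the decoder.
def extend_segment(start_ms, end_ms,
                   backward_extensions_length_ms=150,
                   forward_extensions_length_ms=750,
                   stream_length_ms=None):
    """Return the widened (start, end) interval handed to the decoder."""
    start = max(0, start_ms - backward_extensions_length_ms)
    end = end_ms + forward_extensions_length_ms
    if stream_length_ms is not None:
        end = min(end, stream_length_ms)  # never extend past the end of the stream
    return start, end

# A VAD hit from 2000 ms to 2600 ms becomes (1850, 3350) with the defaults:
print(extend_segment(2000, 2600))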

How to configure Speech Engine workers

     Posted on: 2020-03-28

A worker is a working thread performing the actual file or realtime stream processing in Speech Engine. This article helps you understand the Speech Engine workers and provides information on how to configure workers for optimal performance and server utilization. The default worker configuration in settings/phxspe.properties is shown below – 8 workers for file processing and 8 workers for realtime stream processing. These numbers mean the maximum number of simultaneously running tasks.
# Multithread settings
server.n_workers = 8
server.n_realtime_workers = 8
Requests for additional file processing tasks are put in a queue and processed according to their order and priorities. Requests for…
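
For illustration, a minimal Python sketch that rewrites these two properties in settings/phxspe.properties follows. The property names come from the excerpt above; the read-modify-write approach and the chosen values are assumptions for the example, not an official configuration tool.

# Sketch: set the worker counts in phxspe.properties (assumed key = value format).
from pathlib import Path

def set_workers(properties_path, n_workers, n_realtime_workers):
    path = Path(properties_path)
    wanted = {
        "server.n_workers": str(n_workers),
        "server.n_realtime_workers": str(n_realtime_workers),
    }
    out = []
    for line in path.read_text().splitlines():
        key = line.split("=", 1)[0].strip()
        if key in wanted:
            out.append(f"{key} = {wanted.pop(key)}")  # replace existing value
        else:
            out.append(line)
    out.extend(f"{k} = {v}" for k, v in wanted.items())  # add missing keys
    path.write_text("\n".join(out) + "\n")

# e.g. allow 16 simultaneous file tasks and keep 8 realtime stream workers:
set_workers("settings/phxspe.properties", 16, 8)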

Technical Training Essentials

     Posted on: 2019-09-27

Core objective: Understanding technical essentials of using Phonexia technologies and products
Duration: ~94 minutes (7 + 19 + 22 + 23 + 23 min chapters)
Intended for product architects or developers; assumes you have already watched the Phonexia technologies introduction video; assumes understanding of working in the command line, REST API principles, and processing JSON or XML
Introduction (7 min): technologies recap; CLI, REST and GUI interfaces overview - https://youtu.be/xzrHyyIl01s
MODULE 1: Getting started with Speech Engine (19 min): Installation; Technologies configuration; Server and database configuration; Users configuration; Files processing; Synchronous and asynchronous requests, results polling; Stream processing - https://youtu.be/4qrB-GfFdWY
MODULE 2: Filtering and supporting…

Speech To Text results explained

     Posted on: 2019-05-27

This article aims to give more details about Speech To Text outputs and hints on how to tailor Speech To Text to best suit your needs. In the process of transcribing speech, the Speech To Text technology usually identifies multiple alternatives for individual speech segments, as multiple phrases can have similar pronunciations, possibly with different word boundaries, e.g. “eight tea machines” vs. “eighty machines”. The technology provides various output types which show either a single transcription alternative or multiple alternatives. For processing realtime streams, two result modes are supported – one mode provides the complete transcription, the other provides incremental results. Output types…
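
As a small illustration of single vs. multiple transcription alternatives, the Python sketch below reuses the “eight tea machines” / “eighty machines” example from the article with a made-up result structure and confidences; it only demonstrates the difference between keeping one alternative and keeping all of them, not the actual Speech Engine output format.

# Hypothetical alternatives for one ambiguous speech segment (illustration only).
segment_alternatives = [
    {"text": "eighty machines", "confidence": 0.55},
    {"text": "eight tea machines", "confidence": 0.45},
]

# Multiple-alternatives view: show every alternative with its confidence.
for alt in segment_alternatives:
    print(f'{alt["confidence"]:.2f}  {alt["text"]}')

# Single-alternative (one-best style) view: keep only the top-scoring one.
best = max(segment_alternatives, key=lambda alt: alt["confidence"])
print("one-best:", best["text"])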