Search Results for: Audio Source Profile

Results 31 - 40 of 59 Page 4 of 6

Phonexia Speech Engine

Relevance: 9%      Posted on: 2017-05-18

About Phonexia Speech Engine v3 (SPE3) is the main executive part of the Phonexia Speech Platform. It is a server application with a REST API through which you can access all available speech technologies. Both Linux 64-bit and Windows 64-bit operating systems are supported. SPE3 is an adjustable server component that houses all speech technologies and provides a RESTful application programming interface to access them. Aside from the technologies themselves, the SPE implements additional functionality supporting work with speech technologies, recordings, streams, and more. Features The main purpose of SPE is to work as a processing unit for…
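Since SPE3 exposes its technologies over a REST API, a client can drive it with plain HTTP calls. The sketch below is a minimal illustration in Python using the `requests` library; the base URL, credentials, and the `/technologies` endpoint path are assumptions for illustration only, not the documented SPE3 API.

```python
# Minimal sketch of querying a Speech Engine server over HTTP.
# The base URL, credentials, and endpoint path are illustrative
# assumptions, not the documented SPE3 REST API.
import requests

BASE_URL = "http://localhost:8600"     # assumed SPE3 host and port
AUTH = ("admin", "phonexia")           # assumed credentials

def list_technologies():
    """Ask the server which speech technologies it currently exposes."""
    response = requests.get(f"{BASE_URL}/technologies", auth=AUTH, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(list_technologies())
```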

Voice Activity Detection – Essential

Relevance: 9%      Posted on: 2018-04-04

Phonexia Voice Activity Detection (VAD) identifies parts of audio recordings with speech content vs. non-speech content. Technology Trained with emphasis on spontaneous telephony conversation; the technology is language-, accent-, text-, and channel-independent, and compatible with the widest range of audio sources possible (applies channel compensation techniques): GSM/CDMA, 3G, VoIP, landlines, etc. Input Input format for processing: WAV or RAW (8- or 16-bit linear coding), A-law or Mu-law, PCM, 8 kHz+ sampling Output Log file with processed information (speech vs. non-speech segments) Segmentation The section Segmentation describes the results of VAD, which are segments of detected voice and silence. Segments are…
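A typical downstream step is to turn the detected segments into simple statistics, e.g. how much speech a recording contains. The sketch below assumes the VAD result has already been parsed into (start, end, label) tuples in seconds; this representation and the label names are illustrative assumptions, not the exact log-file format.

```python
# Sketch: summarizing VAD segmentation, assuming segments were already
# parsed into (start_sec, end_sec, label) tuples. The "speech"/"silence"
# labels are illustrative, not the technology's exact output format.

def speech_ratio(segments):
    """Return (total_speech_seconds, fraction_of_speech) for a recording."""
    speech = sum(end - start for start, end, label in segments if label == "speech")
    total = sum(end - start for start, end, _ in segments)
    return speech, (speech / total if total else 0.0)

segments = [
    (0.0, 1.2, "silence"),
    (1.2, 6.8, "speech"),
    (6.8, 7.5, "silence"),
]
print(speech_ratio(segments))   # -> (5.6, ~0.75)
```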

Q: Please give me a recommendation for LID adaptation set.

Relevance: 9%      Posted on: 2017-06-27

A: The following is recommended:
For adding a new language to the language pack:
- 20+ hours of audio for each new language model (or 25+ hours of audio containing 80% of speech)
- Only 1 language per recording
For adapting an existing language model (discriminative training):
- 10+ hours of audio for each language
- May be done on the customer site, or in Phonexia using anonymized data (= language-prints extracted from a .wav audio)

Phonexia End User License Agreement

Relevance: 9%      Posted on: 2019-02-27

Please read the terms and conditions of this End User License Agreement (the “Agreement”) carefully before you use the Phonexia proprietary software providing speech solutions, technologies and accompanying services (the “Software”) delivered and marketed by Phonexia s.r.o.

Account

Relevance: 9%      Posted on: 2018-03-21

Registered info: GDPR tools: Full name: Login name: E-mail: Change profile Change password Phonexia Partner Portal documents access level: Hints: General rules Registration for the Phonexia Partner Portal is free of charge. However, various user access levels apply to the articles; some are available only to Phonexia Partners and Certified members. You may request an upgrade of your access level by contacting business support at info@phonexia.com Legal documents By registering for, logging in to, and using this website you agree to the Privacy Policy and Terms of Service.

What is a user configuration file and how to use it

Relevance: 9%      Posted on: 2020-03-28

Advanced users with appropriate knowledge (gained e.g. by taking the Phonexia Academy Advanced Training) may want to fine-tune the behavior of the technologies to adapt to the nature of their audio data. Modifying the original BSAPI configuration files directly can be dangerous – inappropriate changes may cause unpredictable behavior, and without a backup of the unmodified file it is difficult to restore a working state. User configuration files provide a way to override processing parameters without modifying the original BSAPI configuration files. WARNING: Inappropriate configuration changes may cause serious issues! Make sure you really know what you are doing. A user configuration file is a…
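Conceptually, a user configuration file is a set of parameter overrides layered on top of the original BSAPI configuration, which itself stays untouched. The sketch below illustrates only that layering idea in Python; the parameter names and the dictionary representation are illustrative assumptions and do not reflect the actual BSAPI or user configuration file format.

```python
# Conceptual sketch of layering user overrides over base configuration
# parameters. The parameter names and values are purely illustrative;
# they are not real BSAPI configuration keys.

def apply_overrides(base_config: dict, user_overrides: dict) -> dict:
    """Return a new config where user-supplied values replace base ones."""
    merged = dict(base_config)      # the original stays untouched
    merged.update(user_overrides)   # user values win for overlapping keys
    return merged

base = {"frame_length_ms": 25, "energy_threshold": 0.5}
user = {"energy_threshold": 0.35}   # tune only what you need
print(apply_overrides(base, user))  # {'frame_length_ms': 25, 'energy_threshold': 0.35}
```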

Keyword Spotting (KWS)

Relevance: 9%      Posted on: 2017-05-18

About KWS Phonexia Keyword Spotting (KWS) identifies occurrences of keywords and/or key phrases in audio recordings. Application areas: Security/defense Maintain fast reaction times by routing calls with specific content to human operators Search for specific information in large call archives Trigger alarms immediately (online) when an event occurs Call centers Increase operator and supervisor efficiency by searching calls Identify inappropriate expressions from operators Check marketing campaigns with automatic script compliance control Mass media and web search servers Index and search multimedia by keyword Route multimedia files and streams according to their content KWS technology Acoustic-based technology, robust even with…
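One common way to use such keyword hits, e.g. for routing calls or triggering alarms, is to filter them by score against a watch list. The sketch below assumes hits are already available as (keyword, time, score) tuples; this representation, the keyword list, and the threshold are illustrative assumptions, not the engine's actual output format.

```python
# Sketch: escalating calls when a spotted keyword exceeds a score
# threshold. The (keyword, time_sec, score) tuples and the 0.8 threshold
# are illustrative assumptions, not the actual KWS output format.

ALERT_KEYWORDS = {"refund", "cancel contract"}
SCORE_THRESHOLD = 0.8

def calls_to_escalate(hits_by_call):
    """Return call IDs whose hits contain a confident alert keyword."""
    escalate = []
    for call_id, hits in hits_by_call.items():
        if any(kw in ALERT_KEYWORDS and score >= SCORE_THRESHOLD
               for kw, _time, score in hits):
            escalate.append(call_id)
    return escalate

hits = {
    "call-001": [("refund", 12.4, 0.91), ("hello", 0.5, 0.99)],
    "call-002": [("cancel contract", 33.0, 0.42)],
}
print(calls_to_escalate(hits))   # -> ['call-001']
```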

Difference between on-the-fly and off-line type of transcription (STT)

Relevance: 9%      Posted on: 2017-12-11

Similarly to a human, the ASR (STT) engine adapts to the acoustic channel, environment, and speaker. The engine also learns more about the content over time, and this information is used to improve recognition. The dictate engine, also known as on-the-fly transcription, does not look into the future and has information about only a few seconds of speech at the beginning of a recording. As the output is requested immediately while the audio is being processed, the engine can't predict what will come in the next seconds of speech. When access to the whole recording is granted during off-line transcription…

Privacy Policy

Relevance: 9%      Posted on: 2018-03-24

Phonexia s.r.o. with registered seat at Chaloupkova 3002/1a, 612 00 Brno, Czech Republic, is a developer and provider of speech technologies software products and related services. We appreciate your visit on our websites and we are pleased that you are interested in our software products and related services. We conform our data use to the European Union’s (“EU”) General Data Protection Regulation (“GDPR”). This Privacy Policy should help you to understand how we as a data controller gather, use and protect your personal information. 1. COLLECTING PERSONAL INFORMATION When you sign up for a Phonexia Account to allow you using…

How to convert STT confusion network results to one-best

Relevance: 9%      Posted on: 2020-04-06

Confusion Network output is the most detailed Speech Engine STT output, as it provides multiple word alternatives for individual timeslots of the processed speech signal. Therefore, many applications want to use it as the main source of the speech transcription and convert it to less verbose output formats internally. This article provides the recommended way to do the conversion. Time slots and word alternatives: The recommended algorithm for converting a Confusion Network (CN) to One-best is as follows: loop through all CN timeslots from start to end; in each timeslot, get the alternative with the highest score, and if it's not <null/> or…
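The excerpt is cut off, but the steps it describes (per timeslot, keep the best-scoring alternative and skip the <null/> placeholder) can be sketched as below. The timeslot data structure is an assumed simplification for illustration, not the exact Speech Engine JSON schema, and the truncated condition is reduced to the <null/> check mentioned in the excerpt.

```python
# Sketch of converting a Confusion Network to a One-best transcript,
# following the excerpt: per timeslot, keep the highest-scoring word
# unless it is the <null/> placeholder. The timeslot structure below is
# an assumed simplification, not the exact Speech Engine output schema.

def confusion_network_to_one_best(timeslots):
    """timeslots: list of lists of {'word': str, 'score': float} alternatives."""
    words = []
    for alternatives in timeslots:            # loop from start to end
        best = max(alternatives, key=lambda alt: alt["score"])
        if best["word"] != "<null/>":         # skip empty placeholders
            words.append(best["word"])
    return " ".join(words)

cn = [
    [{"word": "hello", "score": 0.9}, {"word": "<null/>", "score": 0.1}],
    [{"word": "<null/>", "score": 0.7}, {"word": "there", "score": 0.3}],
    [{"word": "world", "score": 0.8}, {"word": "word", "score": 0.2}],
]
print(confusion_network_to_one_best(cn))   # -> "hello world"
```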