Struct gapi_grpc::google::cloud::speech::v1p1beta1::RecognitionConfig
Provides information to the recognizer that specifies how to process the request.
Fields
encoding: i32
Encoding of audio data sent in all RecognitionAudio messages. This field is optional for FLAC and WAV audio files and required for all other audio formats. For details, see [AudioEncoding][google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding].
sample_rate_hertz: i32
Sample rate in Hertz of the audio data sent in all RecognitionAudio messages. Valid values are 8000-48000. 16000 is optimal. For best results, set the sampling rate of the audio source to 16000 Hz. If that's not possible, use the native sample rate of the audio source (instead of re-sampling). This field is optional for FLAC and WAV audio files, but is required for all other audio formats. For details, see [AudioEncoding][google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding].
audio_channel_count: i32
The number of channels in the input audio data. ONLY set this for MULTI-CHANNEL recognition. Valid values for LINEAR16 and FLAC are 1-8. Valid values for OGG_OPUS are 1-254. The only valid value for MULAW, AMR, AMR_WB and SPEEX_WITH_HEADER_BYTE is 1. If 0 or omitted, defaults to one channel (mono). Note: We only recognize the first channel by default. To perform independent recognition on each channel, set enable_separate_recognition_per_channel to true.
enable_separate_recognition_per_channel: bool
This must be set to true explicitly, with audio_channel_count > 1, to get each channel recognized separately. The recognition result will contain a channel_tag field to state which channel that result belongs to. If this is not true, we will only recognize the first channel. The request is billed cumulatively for all channels recognized: audio_channel_count multiplied by the length of the audio.
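As a sketch, a two-channel request built through the prost-generated Default impl (the field values here are illustrative, not API defaults):

```rust
use gapi_grpc::google::cloud::speech::v1p1beta1::RecognitionConfig;

// Hypothetical stereo configuration; billing covers both channels.
let config = RecognitionConfig {
    audio_channel_count: 2,
    enable_separate_recognition_per_channel: true,
    language_code: "en-US".to_string(),
    ..Default::default()
};
```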
language_code: String
Required. The language of the supplied audio as a BCP-47 language tag. Example: "en-US". See Language Support for a list of the currently supported language codes.
alternative_language_codes: Vec<String>
A list of up to 3 additional BCP-47 language tags, listing possible alternative languages of the supplied audio. See Language Support for a list of the currently supported language codes. If alternative languages are listed, the recognition result will contain recognition in the most likely language detected, including the main language_code. The recognition result will include the language tag of the language detected in the audio. Note: This feature is only supported for Voice Command and Voice Search use cases, and performance may vary for other use cases (e.g., phone call transcription).
max_alternatives: i32
Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of SpeechRecognitionAlternative messages within each SpeechRecognitionResult. The server may return fewer than max_alternatives. Valid values are 0-30. A value of 0 or 1 will return a maximum of one. If omitted, will return a maximum of one.
profanity_filter: bool
If set to true, the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. "f***". If set to false or omitted, profanities won't be filtered out.
adaptation: Option<SpeechAdaptation>
Speech adaptation configuration improves the accuracy of speech recognition. When speech adaptation is set, it supersedes the speech_contexts field. For more information, see the speech adaptation documentation.
speech_contexts: Vec<SpeechContext>
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext]. A means to provide context to assist the speech recognition. For more information, see speech adaptation.
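A hedged sketch of phrase biasing via this field, assuming SpeechContext can be filled through its Default impl for any fields not shown:

```rust
use gapi_grpc::google::cloud::speech::v1p1beta1::{RecognitionConfig, SpeechContext};

// Bias recognition toward domain vocabulary. Note that a populated
// `adaptation` field would supersede these contexts.
let config = RecognitionConfig {
    speech_contexts: vec![SpeechContext {
        phrases: vec!["weather forecast".to_string()],
        ..Default::default()
    }],
    ..Default::default()
};
```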
enable_word_time_offsets: bool
If true, the top result includes a list of words and the start and end time offsets (timestamps) for those words. If false, no word-level time offset information is returned. The default is false.
enable_word_confidence: bool
If true, the top result includes a list of words and the confidence for those words. If false, no word-level confidence information is returned. The default is false.
enable_automatic_punctuation: bool
If true, adds punctuation to recognition result hypotheses. This feature is only available in select languages. Setting this for requests in other languages has no effect at all. The default false value does not add punctuation to result hypotheses.
enable_spoken_punctuation: Option<bool>
The spoken punctuation behavior for the call. If not set, uses default behavior based on the model of choice, e.g. command_and_search will enable spoken punctuation by default. If true, replaces spoken punctuation with the corresponding symbols in the request. For example, "how are you question mark" becomes "how are you?". See https://cloud.google.com/speech-to-text/docs/spoken-punctuation for support. If false, spoken punctuation is not replaced.
enable_spoken_emojis: Option<bool>
The spoken emoji behavior for the call. If not set, uses default behavior based on the model of choice. If true, adds spoken emoji formatting for the request. This will replace spoken emojis with the corresponding Unicode symbols in the final transcript. If false, spoken emojis are not replaced.
enable_speaker_diarization: bool
If true, enables speaker detection for each recognized word in the top alternative of the recognition result using a speaker_tag provided in the WordInfo. Note: Use diarization_config instead.
diarization_speaker_count: i32
If set, specifies the estimated number of speakers in the conversation. Defaults to 2. Ignored unless enable_speaker_diarization is set to true. Note: Use diarization_config instead.
diarization_config: Option<SpeakerDiarizationConfig>
Config to enable speaker diarization and set additional parameters to make diarization better suited for your application. Note: When this is enabled, we send all the words from the beginning of the audio for the top alternative in every consecutive STREAMING response. This is done in order to improve our speaker tags as our models learn to identify the speakers in the conversation over time. For non-streaming requests, the diarization results will be provided only in the top alternative of the FINAL SpeechRecognitionResult.
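A sketch using the recommended diarization_config path rather than the deprecated per-field flags; the SpeakerDiarizationConfig field names shown are assumptions about that message type, not confirmed by this page:

```rust
use gapi_grpc::google::cloud::speech::v1p1beta1::{RecognitionConfig, SpeakerDiarizationConfig};

// Assumed field names on SpeakerDiarizationConfig; values are illustrative.
let config = RecognitionConfig {
    diarization_config: Some(SpeakerDiarizationConfig {
        enable_speaker_diarization: true,
        min_speaker_count: 2,
        max_speaker_count: 4,
        ..Default::default()
    }),
    ..Default::default()
};
```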
metadata: Option<RecognitionMetadata>
Metadata regarding this request.
model: String
Which model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the RecognitionConfig.

| Model | Description |
|---|---|
| command_and_search | Best for short queries such as voice commands or voice search. |
| phone_call | Best for audio that originated from a phone call (typically recorded at an 8khz sampling rate). |
| video | Best for audio that originated from video or includes multiple speakers. Ideally the audio is recorded at a 16khz or greater sampling rate. This is a premium model that costs more than the standard rate. |
| default | Best for audio that is not one of the specific audio models. For example, long-form audio. Ideally the audio is high-fidelity, recorded at a 16khz or greater sampling rate. |
use_enhanced: bool
Set to true to use an enhanced model for speech recognition. If use_enhanced is set to true and the model field is not set, then an appropriate enhanced model is chosen if an enhanced model exists for the audio. If use_enhanced is true and an enhanced version of the specified model does not exist, then the speech is recognized using the standard version of the specified model.
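Putting these fields together, a sketch of a phone-call style configuration. Values are illustrative; the AudioEncoding path and its Mulaw variant name are assumptions based on prost's convention of nesting message-scoped enums in a snake_cased submodule:

```rust
use gapi_grpc::google::cloud::speech::v1p1beta1::recognition_config::AudioEncoding;
use gapi_grpc::google::cloud::speech::v1p1beta1::RecognitionConfig;

// Enhanced phone_call model, falling back to the standard version
// if no enhanced variant exists for this model.
let config = RecognitionConfig {
    encoding: AudioEncoding::Mulaw as i32,
    sample_rate_hertz: 8000,
    language_code: "en-US".to_string(),
    model: "phone_call".to_string(),
    use_enhanced: true,
    ..Default::default()
};
```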
Implementations
impl RecognitionConfig
pub fn encoding(&self) -> AudioEncoding
Returns the enum value of encoding, or the default if the field is set to an invalid enum value.
pub fn set_encoding(&mut self, value: AudioEncoding)
Sets encoding to the provided enum value.
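A small sketch of how this accessor pair relates to the raw i32 field (Linear16 is assumed to be the prost-renamed LINEAR16 variant, and the enum path is assumed from prost's nested-enum convention):

```rust
use gapi_grpc::google::cloud::speech::v1p1beta1::recognition_config::AudioEncoding;
use gapi_grpc::google::cloud::speech::v1p1beta1::RecognitionConfig;

let mut config = RecognitionConfig::default();
config.set_encoding(AudioEncoding::Linear16);

// The getter decodes the raw i32 back into the enum view.
assert_eq!(config.encoding(), AudioEncoding::Linear16);
assert_eq!(config.encoding, AudioEncoding::Linear16 as i32);
```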
Trait Implementations
impl Clone for RecognitionConfig
fn clone(&self) -> RecognitionConfig
pub fn clone_from(&mut self, source: &Self)
impl Debug for RecognitionConfig
impl Default for RecognitionConfig
fn default() -> RecognitionConfig
impl Message for RecognitionConfig
fn encode_raw<B>(&self, buf: &mut B) where B: BufMut
fn merge_field<B>(&mut self, tag: u32, wire_type: WireType, buf: &mut B, ctx: DecodeContext) -> Result<(), DecodeError> where B: Buf
fn encoded_len(&self) -> usize
fn clear(&mut self)
pub fn encode<B>(&self, buf: &mut B) -> Result<(), EncodeError> where B: BufMut
pub fn encode_length_delimited<B>(&self, buf: &mut B) -> Result<(), EncodeError> where B: BufMut
pub fn decode<B>(buf: B) -> Result<Self, DecodeError> where Self: Default, B: Buf
pub fn decode_length_delimited<B>(buf: B) -> Result<Self, DecodeError> where Self: Default, B: Buf
pub fn merge<B>(&mut self, buf: B) -> Result<(), DecodeError> where B: Buf
pub fn merge_length_delimited<B>(&mut self, buf: B) -> Result<(), DecodeError> where B: Buf
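These methods come from the prost Message trait, so the config serializes to and from the protobuf wire format. A round-trip sketch, assuming prost is available as a direct dependency:

```rust
use gapi_grpc::google::cloud::speech::v1p1beta1::RecognitionConfig;
use prost::Message;

let config = RecognitionConfig {
    language_code: "en-US".to_string(),
    ..Default::default()
};

// Encode into the wire format; Vec<u8> implements BufMut and grows as needed.
let mut buf = Vec::new();
config.encode(&mut buf).expect("encoding into a Vec cannot run out of capacity");

// Decode from any Buf implementor, e.g. a byte slice.
let decoded = RecognitionConfig::decode(buf.as_slice()).expect("valid wire data");
assert_eq!(config, decoded);
```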
impl PartialEq<RecognitionConfig> for RecognitionConfig
fn eq(&self, other: &RecognitionConfig) -> bool
fn ne(&self, other: &RecognitionConfig) -> bool
impl StructuralPartialEq for RecognitionConfig
Auto Trait Implementations
impl RefUnwindSafe for RecognitionConfig
impl Send for RecognitionConfig
impl Sync for RecognitionConfig
impl Unpin for RecognitionConfig
impl UnwindSafe for RecognitionConfig
Blanket Implementations
impl<T> Any for T where T: 'static + ?Sized
impl<T> Borrow<T> for T where T: ?Sized
impl<T> BorrowMut<T> for T where T: ?Sized
pub fn borrow_mut(&mut self) -> &mut T
impl<T> From<T> for T
impl<T> Instrument for T
pub fn instrument(self, span: Span) -> Instrumented<Self>
pub fn in_current_span(self) -> Instrumented<Self>
impl<T> Instrument for T
pub fn instrument(self, span: Span) -> Instrumented<Self>
pub fn in_current_span(self) -> Instrumented<Self>
impl<T, U> Into<U> for T where U: From<T>
impl<T> IntoRequest<T> for T
pub fn into_request(self) -> Request<T>
impl<T> ToOwned for T where T: Clone
type Owned = T
The resulting type after obtaining ownership.
pub fn to_owned(&self) -> T
pub fn clone_into(&self, target: &mut T)
impl<T, U> TryFrom<U> for T where U: Into<T>
type Error = Infallible
The type returned in the event of a conversion error.
pub fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
impl<T, U> TryInto<U> for T where U: TryFrom<T>
type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
pub fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
impl<V, T> VZip<V> for T where V: MultiLane<T>
impl<T> WithSubscriber for T
pub fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>