Module gapi_grpc::google::assistant::embedded::v1alpha2
Modules
| assist_config | Nested message and enum types in `AssistConfig`. |
| assist_request | Nested message and enum types in `AssistRequest`. |
| assist_response | Nested message and enum types in `AssistResponse`. |
| audio_in_config | Nested message and enum types in `AudioInConfig`. |
| audio_out_config | Nested message and enum types in `AudioOutConfig`. |
| device_location | Nested message and enum types in `DeviceLocation`. |
| dialog_state_out | Nested message and enum types in `DialogStateOut`. |
| embedded_assistant_client | Generated client implementations. |
| screen_out | Nested message and enum types in `ScreenOut`. |
| screen_out_config | Nested message and enum types in `ScreenOutConfig`. |
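
The `embedded_assistant_client` module holds the tonic-generated client. A minimal sketch of opening the service's bidirectional `Assist` stream, assuming the usual tonic/prost conventions (`EmbeddedAssistantClient::connect`, an `assist` method taking a request stream) and the public Assistant endpoint; exact names and the required auth interceptor may differ in this crate:

```rust
// Hypothetical sketch: drive the bidirectional Assist stream.
// Assumes tokio, tonic, and tokio-stream as dependencies; a real call
// also needs OAuth credentials attached to each request.
use gapi_grpc::google::assistant::embedded::v1alpha2::{
    embedded_assistant_client::EmbeddedAssistantClient, AssistRequest,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut client =
        EmbeddedAssistantClient::connect("https://embeddedassistant.googleapis.com").await?;

    // Outbound side: a stream of AssistRequest messages (config first,
    // then audio chunks). A single default message just for shape here.
    let outbound = tokio_stream::iter(vec![AssistRequest::default()]);

    // Inbound side: one or more AssistResponse messages streamed back.
    let mut inbound = client.assist(outbound).await?.into_inner();
    while let Some(response) = inbound.message().await? {
        println!("{:?}", response);
    }
    Ok(())
}
```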
Structs
| AssistConfig | Specifies how to process the `AssistRequest` messages. |
| AssistRequest | The top-level message sent by the client. Clients must send at least two, and typically numerous `AssistRequest` messages. |
| AssistResponse | The top-level message received by the client. A series of one or more `AssistResponse` messages are streamed back to the client. |
| AudioInConfig | Specifies how to process the `audio_in` data that will be provided in subsequent requests. |
| AudioOut | The audio containing the Assistant’s response to the query. Sequential chunks of audio data are received in sequential `AssistResponse` messages. |
| AudioOutConfig | Specifies the desired format for the server to use when it returns `audio_out` messages. |
| DebugConfig | Debugging parameters for the current request. |
| DebugInfo | Debug info for developer. Only returned if request set `return_debug_info` to true. |
| DeviceAction | The response returned to the device if the user has triggered a Device Action. For example, a device which supports the query *Turn on the light* would receive a `DeviceAction` with a JSON payload containing the semantics of the request. |
| DeviceConfig | Required. Fields that identify the device to the Assistant. |
| DeviceLocation | There are three sources of locations. They are used with this precedence: |
| DialogStateIn | Provides information about the current dialog state. |
| DialogStateOut | The dialog state resulting from the user’s query. Multiple of these messages may be received. |
| ScreenOut | The Assistant’s visual output response to query. Enabled by `screen_out_config`. |
| ScreenOutConfig | Specifies the desired format for the server to use when it returns `screen_out` response. |
| SpeechRecognitionResult | The estimated transcription of a phrase the user has spoken. This could be a single segment or the full guess of the user’s spoken query. |
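
Taken together, the request structs follow a simple protocol: the first `AssistRequest` on the stream carries an `AssistConfig`, and every subsequent one carries raw `audio_in` bytes. A sketch of constructing those two message shapes, assuming prost's usual oneof naming (`assist_request::Type`, `assist_config::Type`, `audio_in_config::Encoding`), which may differ slightly in this crate:

```rust
// Hypothetical sketch of the two AssistRequest shapes. Field and oneof
// names follow prost conventions; verify against the generated code.
use gapi_grpc::google::assistant::embedded::v1alpha2::{
    assist_config, assist_request, audio_in_config, audio_out_config,
    AssistConfig, AssistRequest, AudioInConfig, AudioOutConfig,
};

/// First message on the stream: configuration only, no audio.
fn initial_request() -> AssistRequest {
    let config = AssistConfig {
        // `audio_in_config` sits in the AssistConfig oneof (vs. text_query).
        r#type: Some(assist_config::Type::AudioInConfig(AudioInConfig {
            encoding: audio_in_config::Encoding::Linear16 as i32,
            sample_rate_hertz: 16000,
        })),
        audio_out_config: Some(AudioOutConfig {
            encoding: audio_out_config::Encoding::Linear16 as i32,
            sample_rate_hertz: 16000,
            volume_percentage: 100,
        }),
        // dialog_state_in, device_config, etc. left as defaults here.
        ..Default::default()
    };
    AssistRequest {
        r#type: Some(assist_request::Type::Config(config)),
    }
}

/// Every later message: a chunk of captured microphone audio.
fn audio_chunk(bytes: Vec<u8>) -> AssistRequest {
    AssistRequest {
        r#type: Some(assist_request::Type::AudioIn(bytes)),
    }
}
```

Sending `initial_request()` followed by repeated `audio_chunk(..)` messages on the `Assist` stream mirrors the "at least two, typically numerous" requirement stated for `AssistRequest` above.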