Struct gapi_grpc::google::cloud::aiplatform::v1beta1::Model
A trained machine learning Model.
Fields
name: String
The resource name of the Model.
display_name: String
Required. The display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters.
description: String
The description of the Model.
predict_schemata: Option<PredictSchemata>
The schemata that describe formats of the Model’s predictions and explanations as given and returned via [PredictionService.Predict][google.cloud.aiplatform.v1beta1.PredictionService.Predict] and [PredictionService.Explain][google.cloud.aiplatform.v1beta1.PredictionService.Explain].
metadata_schema_uri: String
Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Model that is specific to it. Unset if the Model does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI; if no additional metadata is needed, this field is set to an empty string. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has read access.
metadata: Option<Value>
Immutable. Additional information about the Model; the schema of the metadata can be found in [metadata_schema][google.cloud.aiplatform.v1beta1.Model.metadata_schema_uri]. Unset if the Model does not have any additional information.
supported_export_formats: Vec<ExportFormat>
Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.
training_pipeline: String
Output only. The resource name of the TrainingPipeline that uploaded this Model, if any.
container_spec: Option<ModelContainerSpec>
Input only. The specification of the container that is to be used when deploying this Model. The specification is ingested upon [ModelService.UploadModel][google.cloud.aiplatform.v1beta1.ModelService.UploadModel], and all binaries it contains are copied and stored internally by Vertex AI. Not present for AutoML Models.
artifact_uri: String
Immutable. The path to the directory containing the Model artifact and any of its supporting files. Not present for AutoML Models.
supported_deployment_resources_types: Vec<i32>
Output only. When this Model is deployed, its prediction resources are described by the prediction_resources field of the [Endpoint.deployed_models][google.cloud.aiplatform.v1beta1.Endpoint.deployed_models] object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an [Endpoint][google.cloud.aiplatform.v1beta1.Endpoint] and does not support online predictions ([PredictionService.Predict][google.cloud.aiplatform.v1beta1.PredictionService.Predict] or [PredictionService.Explain][google.cloud.aiplatform.v1beta1.PredictionService.Explain]). Such a Model can serve predictions by using a [BatchPredictionJob][google.cloud.aiplatform.v1beta1.BatchPredictionJob], if it has at least one entry each in [supported_input_storage_formats][google.cloud.aiplatform.v1beta1.Model.supported_input_storage_formats] and [supported_output_storage_formats][google.cloud.aiplatform.v1beta1.Model.supported_output_storage_formats].
supported_input_storage_formats: Vec<String>
Output only. The formats this Model supports in [BatchPredictionJob.input_config][google.cloud.aiplatform.v1beta1.BatchPredictionJob.input_config]. If [PredictSchemata.instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri] exists, the instances should be given as per that schema.
The possible formats are:
- jsonl: The JSON Lines format, where each instance is a single line. Uses [GcsSource][google.cloud.aiplatform.v1beta1.BatchPredictionJob.InputConfig.gcs_source].
- csv: The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses [GcsSource][google.cloud.aiplatform.v1beta1.BatchPredictionJob.InputConfig.gcs_source].
- tf-record: The TFRecord format, where each instance is a single record in tfrecord syntax. Uses [GcsSource][google.cloud.aiplatform.v1beta1.BatchPredictionJob.InputConfig.gcs_source].
- tf-record-gzip: Similar to tf-record, but the file is gzipped. Uses [GcsSource][google.cloud.aiplatform.v1beta1.BatchPredictionJob.InputConfig.gcs_source].
- bigquery: Each instance is a single row in BigQuery. Uses [BigQuerySource][google.cloud.aiplatform.v1beta1.BatchPredictionJob.InputConfig.bigquery_source].
- file-list: Each line of the file is the location of an instance to process. Uses the gcs_source field of the [InputConfig][google.cloud.aiplatform.v1beta1.BatchPredictionJob.InputConfig] object.
If this Model doesn’t support any of these formats, it cannot be used with a [BatchPredictionJob][google.cloud.aiplatform.v1beta1.BatchPredictionJob]. However, if it has [supported_deployment_resources_types][google.cloud.aiplatform.v1beta1.Model.supported_deployment_resources_types], it could serve online predictions by using [PredictionService.Predict][google.cloud.aiplatform.v1beta1.PredictionService.Predict] or [PredictionService.Explain][google.cloud.aiplatform.v1beta1.PredictionService.Explain].
supported_output_storage_formats: Vec<String>
Output only. The formats this Model supports in [BatchPredictionJob.output_config][google.cloud.aiplatform.v1beta1.BatchPredictionJob.output_config]. If both [PredictSchemata.instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri] and [PredictSchemata.prediction_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.prediction_schema_uri] exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema).
The possible formats are:
- jsonl: The JSON Lines format, where each prediction is a single line. Uses [GcsDestination][google.cloud.aiplatform.v1beta1.BatchPredictionJob.OutputConfig.gcs_destination].
- csv: The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses [GcsDestination][google.cloud.aiplatform.v1beta1.BatchPredictionJob.OutputConfig.gcs_destination].
- bigquery: Each prediction is a single row in a BigQuery table. Uses [BigQueryDestination][google.cloud.aiplatform.v1beta1.BatchPredictionJob.OutputConfig.bigquery_destination].
If this Model doesn’t support any of these formats, it cannot be used with a [BatchPredictionJob][google.cloud.aiplatform.v1beta1.BatchPredictionJob]. However, if it has [supported_deployment_resources_types][google.cloud.aiplatform.v1beta1.Model.supported_deployment_resources_types], it could serve online predictions by using [PredictionService.Predict][google.cloud.aiplatform.v1beta1.PredictionService.Predict] or [PredictionService.Explain][google.cloud.aiplatform.v1beta1.PredictionService.Explain].
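For example, a caller can gate batch-prediction setup on these two lists; a minimal sketch (the helper and the format strings are illustrative, not part of the crate):

    // Returns true if `model` advertises both the input and output storage
    // formats needed for a planned BatchPredictionJob. An empty list in
    // either field means the Model cannot be used for batch prediction at all.
    fn supports_batch_formats(model: &Model, input: &str, output: &str) -> bool {
        model.supported_input_storage_formats.iter().any(|f| f == input)
            && model.supported_output_storage_formats.iter().any(|f| f == output)
    }

    // e.g. supports_batch_formats(&model, "jsonl", "jsonl")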
create_time: Option<Timestamp>
Output only. Timestamp when this Model was uploaded into Vertex AI.
update_time: Option<Timestamp>
Output only. Timestamp when this Model was most recently updated.
deployed_models: Vec<DeployedModelRef>
Output only. The pointers to DeployedModels created from this Model. Note that the Model could have been deployed to Endpoints in different Locations.
explanation_spec: Option<ExplanationSpec>
The default explanation specification for this Model.
If populated, the Model can be used for [requesting explanation][PredictionService.Explain] after being [deployed][google.cloud.aiplatform.v1beta1.EndpointService.DeployModel], and for [batch explanation][BatchPredictionJob.generate_explanation].
All fields of the explanation_spec can be overridden by [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] of [DeployModelRequest.deployed_model][google.cloud.aiplatform.v1beta1.DeployModelRequest.deployed_model], or [explanation_spec][google.cloud.aiplatform.v1beta1.BatchPredictionJob.explanation_spec] of [BatchPredictionJob][google.cloud.aiplatform.v1beta1.BatchPredictionJob].
If the default explanation specification is not set for this Model, this Model can still be used for [requesting explanation][PredictionService.Explain] by setting [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] of [DeployModelRequest.deployed_model][google.cloud.aiplatform.v1beta1.DeployModelRequest.deployed_model] and for [batch explanation][BatchPredictionJob.generate_explanation] by setting [explanation_spec][google.cloud.aiplatform.v1beta1.BatchPredictionJob.explanation_spec] of [BatchPredictionJob][google.cloud.aiplatform.v1beta1.BatchPredictionJob].
etag: String
Used to perform consistent read-modify-write updates. If not set, a blind “overwrite” update happens.
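In practice the etag acts as an optimistic-concurrency token: carry the value from a freshly read Model into the Model you send back in an update. A minimal sketch (the update RPC itself is omitted; `fetched` is assumed to come from a prior ModelService.GetModel call):

    // Start from the server's current view of the Model and modify it.
    let mut update = fetched.clone();
    update.description = "Updated description".to_string();
    // `update.etag` still holds the value read from the server; sending it
    // back makes the write conditional on the Model being unchanged, so a
    // concurrent modification is rejected instead of silently overwritten.
    // `update` would then be passed to ModelService.UpdateModel.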
labels: HashMap<String, String>
The labels with user-defined metadata to organize your Models.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
encryption_spec: Option<EncryptionSpec>
Customer-managed encryption key spec for a Model. If set, this Model and all sub-resources of this Model will be secured by this key.
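To illustrate how the writable fields fit together, here is a minimal sketch (not taken from the crate's docs) of populating a Model for [ModelService.UploadModel][google.cloud.aiplatform.v1beta1.ModelService.UploadModel]; the URI and label values are placeholders:

    use std::collections::HashMap;

    // Label keys/values: at most 64 characters; lowercase letters, digits,
    // underscores and dashes only.
    let mut labels = HashMap::new();
    labels.insert("env".to_string(), "staging".to_string());

    let model = Model {
        display_name: "my-model".to_string(), // required, up to 128 UTF-8 characters
        description: "An example model".to_string(),
        artifact_uri: "gs://my-bucket/model/".to_string(), // placeholder path
        labels,
        // Output-only fields (name, create_time, the supported_* lists, ...)
        // are populated by Vertex AI and left at their defaults here.
        ..Default::default()
    };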
Implementations
impl Model
pub fn supported_deployment_resources_types(
    &self
) -> FilterMap<Cloned<Iter<'_, i32>>, fn(_: i32) -> Option<DeploymentResourcesType>>
Returns an iterator which yields the valid enum values contained in supported_deployment_resources_types.
pub fn push_supported_deployment_resources_types(
    &mut self,
    value: DeploymentResourcesType
)
Appends the provided enum value to supported_deployment_resources_types.
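A short sketch of how these two helpers pair up. The nested enum path (written here as model::DeploymentResourcesType) and the DedicatedResources variant follow prost's conventions for this proto and are stated as assumptions:

    let mut m = Model::default();
    // Record a supported configuration type as a typed enum value; it is
    // stored internally in the Vec<i32> field.
    m.push_supported_deployment_resources_types(
        model::DeploymentResourcesType::DedicatedResources,
    );

    // Read the list back as enum values. Unknown i32 values (e.g. from a
    // newer API revision) are skipped by the iterator rather than surfaced.
    let supports_online = m.supported_deployment_resources_types().next().is_some();
    assert!(supports_online);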
Trait Implementations
impl Clone for Model
impl Debug for Model
impl Default for Model
impl Message for Model
fn encode_raw<B>(&self, buf: &mut B) where
    B: BufMut,
fn merge_field<B>(
    &mut self,
    tag: u32,
    wire_type: WireType,
    buf: &mut B,
    ctx: DecodeContext
) -> Result<(), DecodeError> where
    B: Buf,
fn encoded_len(&self) -> usize
fn clear(&mut self)
pub fn encode<B>(&self, buf: &mut B) -> Result<(), EncodeError> where
    B: BufMut,
pub fn encode_length_delimited<B>(&self, buf: &mut B) -> Result<(), EncodeError> where
    B: BufMut,
pub fn decode<B>(buf: B) -> Result<Self, DecodeError> where
    Self: Default,
    B: Buf,
pub fn decode_length_delimited<B>(buf: B) -> Result<Self, DecodeError> where
    Self: Default,
    B: Buf,
pub fn merge<B>(&mut self, buf: B) -> Result<(), DecodeError> where
    B: Buf,
pub fn merge_length_delimited<B>(&mut self, buf: B) -> Result<(), DecodeError> where
    B: Buf,
impl PartialEq<Model> for Model
impl StructuralPartialEq for Model
Auto Trait Implementations
impl RefUnwindSafe for Model
impl Send for Model
impl Sync for Model
impl Unpin for Model
impl UnwindSafe for Model
Blanket Implementations
impl<T> Any for T where
    T: 'static + ?Sized,
impl<T> Borrow<T> for T where
    T: ?Sized,
impl<T> BorrowMut<T> for T where
    T: ?Sized,
pub fn borrow_mut(&mut self) -> &mut T
impl<T> From<T> for T
impl<T> Instrument for T
pub fn instrument(self, span: Span) -> Instrumented<Self>
pub fn in_current_span(self) -> Instrumented<Self>
impl<T, U> Into<U> for T where
    U: From<T>,
impl<T> IntoRequest<T> for T
pub fn into_request(self) -> Request<T>
impl<T> ToOwned for T where
    T: Clone,
type Owned = T
The resulting type after obtaining ownership.
pub fn to_owned(&self) -> T
pub fn clone_into(&self, target: &mut T)
impl<T, U> TryFrom<U> for T where
    U: Into<T>,
type Error = Infallible
The type returned in the event of a conversion error.
pub fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
impl<T, U> TryInto<U> for T where
    U: TryFrom<T>,
type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
pub fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
impl<V, T> VZip<V> for T where
    V: MultiLane<T>,
impl<T> WithSubscriber for T
pub fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
    S: Into<Dispatch>,