Q1. You work at an organization that maintains a cloud-based communication platform that integrates conventional chat, voice, and video conferencing into one platform. The audio recordings are stored in Cloud Storage. All recordings have an 8 kHz sample rate and are more than one minute long. You need to implement a new feature in the platform that will automatically transcribe voice call recordings into text for future applications, such as call summarization and sentiment analysis. How should you implement the voice call transcription feature following Google-recommended best practices?
A. Use the original audio sampling rate, and transcribe the audio by using the Speech-to-Text API with synchronous recognition.
B. Use the original audio sampling rate, and transcribe the audio by using the Speech-to-Text API with asynchronous recognition.
C. Upsample the audio recordings to 16 kHz, and transcribe the audio by using the Speech-to-Text API with synchronous recognition.
D. Upsample the audio recordings to 16 kHz, and transcribe the audio by using the Speech-to-Text API with asynchronous recognition.
Correct Answer: D
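Because the recordings are longer than one minute and live in Cloud Storage, the keyed answer calls for asynchronous (long-running) recognition. Below is a minimal sketch with the google-cloud-speech client; the bucket URI, encoding, and language code are hypothetical placeholders, and the 16 kHz sample rate assumes the recordings were upsampled as option D describes.

```python
from google.cloud import speech

# Minimal sketch: asynchronous (long-running) recognition of a recording
# stored in Cloud Storage. URI, encoding, and language code are placeholders.
client = speech.SpeechClient()

audio = speech.RecognitionAudio(uri="gs://example-bucket/calls/recording-001.wav")
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,  # assumes the 8 kHz source was upsampled to 16 kHz
    language_code="en-US",
)

# Recordings longer than one minute must use long_running_recognize
# rather than the synchronous recognize call.
operation = client.long_running_recognize(config=config, audio=audio)
response = operation.result(timeout=600)

for result in response.results:
    print(result.alternatives[0].transcript)
```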
Q2. You are implementing a batch inference ML pipeline in Google Cloud. The model was developed by using TensorFlow and is stored in SavedModel format in Cloud Storage. You need to apply the model to a historical dataset that is stored in a BigQuery table. You want to perform inference with minimal effort. What should you do?
A. Import the TensorFlow model by using the CREATE MODEL statement in BigQuery ML. Apply the historical data to the TensorFlow model.
B. Export the historical data to Cloud Storage in Avro format. Configure a Vertex AI batch prediction job to generate predictions for the exported data.
C. Export the historical data to Cloud Storage in CSV format. Configure a Vertex AI batch prediction job to generate predictions for the exported data.
D. Configure and deploy a Vertex AI endpoint. Use the endpoint to get predictions from the historical data in BigQuery.
Correct Answer: B
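A sketch of the workflow in the keyed answer: export the BigQuery table to Cloud Storage, then register the SavedModel and run a Vertex AI batch prediction job over the exported files. The project, bucket, table, and container image names are hypothetical, and the instances_format value is an assumption that must match an input format your model's serving container actually supports.

```python
from google.cloud import aiplatform, bigquery

PROJECT = "example-project"      # hypothetical project ID
BUCKET = "gs://example-bucket"   # hypothetical bucket

# 1. Export the historical BigQuery table to Cloud Storage in Avro format.
bq_client = bigquery.Client(project=PROJECT)
extract_config = bigquery.ExtractJobConfig(
    destination_format=bigquery.DestinationFormat.AVRO
)
bq_client.extract_table(
    "example-project.example_dataset.historical_data",  # hypothetical table
    f"{BUCKET}/exports/historical-*.avro",
    job_config=extract_config,
).result()

# 2. Register the SavedModel and run a batch prediction job over the export.
aiplatform.init(project=PROJECT, location="us-central1", staging_bucket=BUCKET)

model = aiplatform.Model.upload(
    display_name="tf-historical-model",
    artifact_uri=f"{BUCKET}/saved_model/",  # directory holding the SavedModel
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-11:latest"
    ),
)

batch_job = model.batch_predict(
    job_display_name="historical-batch-inference",
    gcs_source=f"{BUCKET}/exports/historical-*.avro",
    gcs_destination_prefix=f"{BUCKET}/predictions/",
    instances_format="avro",  # assumption: must be a format the container supports
    machine_type="n1-standard-4",
)
batch_job.wait()
```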
Q3. You have recently developed a custom model for image classification by using a neural network. You need to automatically identify the values for learning rate, number of layers, and kernel size. To do this, you plan to run multiple jobs in parallel to identify the parameters that optimize performance. You want to minimize custom code development and infrastructure management. What should you do?
A. Create a Vertex AI pipeline that runs different model training jobs in parallel.
B. Train an AutoML image classification model.
C. Create a custom training job that uses the Vertex AI Vizier SDK for parameter optimization.
D. Create a Vertex AI hyperparameter tuning job.
Correct Answer: D
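A sketch of the keyed answer with the Vertex AI SDK: a CustomJob wrapped in a HyperparameterTuningJob that searches learning rate, number of layers, and kernel size across parallel trials. The project, bucket, container image, metric name, and search ranges are hypothetical, and the training code is assumed to report the metric (for example via the cloudml-hypertune library).

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

# Hypothetical project, bucket, and training image.
aiplatform.init(
    project="example-project",
    location="us-central1",
    staging_bucket="gs://example-staging-bucket",
)

worker_pool_specs = [{
    "machine_spec": {"machine_type": "n1-standard-8"},
    "replica_count": 1,
    "container_spec": {
        # The container is expected to accept --learning_rate, --num_layers,
        # and --kernel_size arguments and report an "accuracy" metric.
        "image_uri": "us-docker.pkg.dev/example-project/training/image-classifier:latest",
    },
}]

custom_job = aiplatform.CustomJob(
    display_name="image-classifier-training",
    worker_pool_specs=worker_pool_specs,
)

hp_job = aiplatform.HyperparameterTuningJob(
    display_name="image-classifier-tuning",
    custom_job=custom_job,
    metric_spec={"accuracy": "maximize"},
    parameter_spec={
        "learning_rate": hpt.DoubleParameterSpec(min=1e-4, max=1e-1, scale="log"),
        "num_layers": hpt.IntegerParameterSpec(min=2, max=8, scale="linear"),
        "kernel_size": hpt.DiscreteParameterSpec(values=[3, 5, 7], scale="linear"),
    },
    max_trial_count=20,
    parallel_trial_count=4,  # trials run in parallel, as the question requires
)
hp_job.run()
```

The service manages the search and the infrastructure, so no custom optimization code or Vizier plumbing is needed.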
Q4. You have recently developed a new ML model in a Jupyter notebook. You want to establish a reliable and repeatable model training process that tracks the versions and lineage of your model artifacts. You plan to retrain your model weekly. How should you operationalize your training process?
A. 1. Create an instance of the CustomTrainingJob class with the Vertex AI SDK to train your model. 2. Using the Notebooks API, create a scheduled execution to run the training code weekly.
B. 1. Create an instance of the CustomJob class with the Vertex AI SDK to train your model. 2. Use the Metadata API to register your model as a model artifact. 3. Using the Notebooks API, create a scheduled execution to run the training code weekly.
C. 1. Create a managed pipeline in Vertex AI Pipelines to train your model by using a Vertex AI CustomTrainingJobOp component. 2. Use the ModelUploadOp component to upload your model to Vertex AI Model Registry. 3. Use Cloud Scheduler and Cloud Functions to run the Vertex AI pipeline weekly.
D. 1. Create a managed pipeline in Vertex AI Pipelines to train your model by using a Vertex AI HyperparameterTuningJobRunOp component. 2. Use the ModelUploadOp component to upload your model to Vertex AI Model Registry. 3. Use Cloud Scheduler and Cloud Functions to run the Vertex AI pipeline weekly.
Correct Answer: C
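A sketch of the keyed answer's pipeline definition using KFP and the Google Cloud Pipeline Components library: CustomTrainingJobOp trains the model, an importer step references the resulting SavedModel as an unmanaged container model, and ModelUploadOp registers it in Vertex AI Model Registry for version and lineage tracking. All project, bucket, and image names are hypothetical; the compiled pipeline would then be submitted by a Cloud Function that Cloud Scheduler triggers weekly.

```python
from kfp import compiler, dsl
from google_cloud_pipeline_components.types import artifact_types
from google_cloud_pipeline_components.v1.custom_job import CustomTrainingJobOp
from google_cloud_pipeline_components.v1.model import ModelUploadOp

MODEL_DIR = "gs://example-bucket/model-output/"  # hypothetical output location

@dsl.pipeline(name="weekly-training-pipeline")
def weekly_training_pipeline(project: str = "example-project",
                             location: str = "us-central1"):
    # Train the model with a custom training container (hypothetical image).
    train_task = CustomTrainingJobOp(
        project=project,
        location=location,
        display_name="weekly-model-training",
        worker_pool_specs=[{
            "machine_spec": {"machine_type": "n1-standard-4"},
            "replica_count": 1,
            "container_spec": {
                "image_uri": "us-docker.pkg.dev/example-project/training/trainer:latest",
            },
        }],
    )

    # Reference the exported SavedModel as an unmanaged container model.
    import_model = dsl.importer(
        artifact_uri=MODEL_DIR,
        artifact_class=artifact_types.UnmanagedContainerModel,
        metadata={
            "containerSpec": {
                "imageUri": "us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-11:latest",
            },
        },
    ).after(train_task)

    # Register the trained model in Vertex AI Model Registry.
    ModelUploadOp(
        project=project,
        location=location,
        display_name="weekly-model",
        unmanaged_container_model=import_model.outputs["artifact"],
    )

# Compile once; a Cloud Scheduler-triggered Cloud Function can then submit the
# compiled pipeline each week, e.g. with aiplatform.PipelineJob(...).submit().
compiler.Compiler().compile(weekly_training_pipeline, "weekly_training_pipeline.json")
```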