Complete the code to create a new Data Fusion instance in GCP.
gcloud datafusion instances create [1] --location=us-central1 --type=BASIC
The command requires the instance name as the first argument; 'my-instance' is a valid name.
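A quick sketch of the completed command, with the blank filled in as the explanation suggests. The gcloud syntax is reproduced from the quiz as given; the command is shown as a string so the argument positions can be checked without a GCP project.

```python
import shlex

# Completed command for this item: blank [1] = "my-instance".
# The command shape is taken from the quiz as given.
command = (
    "gcloud datafusion instances create my-instance "
    "--location=us-central1 --type=BASIC"
)
tokens = shlex.split(command)
print(tokens[4])  # the instance-name argument: my-instance
```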
Complete the code to start a pipeline run in Data Fusion.
gcloud datafusion pipelines run [1] etl-pipeline --location=us-central1
The instance name must match the Data Fusion instance where the pipeline is deployed; 'my-instance' is correct here.
Fix the error in the pipeline configuration to specify the source plugin correctly.
"source": {"name": "[1]", "type": "plugin"}
The source plugin for reading from BigQuery is 'BigQuerySource'. 'BigQuerySink' is for writing data.
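The corrected source block, with the blank set to the reading plugin named in the explanation. The JSON schema is assumed to be exactly the fragment the quiz shows; building it as a Python dict lets the value be checked.

```python
import json

# Corrected source block for this item: blank [1] = "BigQuerySource"
# (reads from BigQuery; "BigQuerySink" would write to it instead).
source = {"name": "BigQuerySource", "type": "plugin"}
print(json.dumps(source))
```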
Fill both blanks to filter records where the 'age' field is greater than 30 in the Wrangler transform.
"filterCondition": "[1] [2] 30"
The filter condition requires the field name 'age' followed by the greater than operator '>'.
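Filling both blanks gives the condition below; a minimal sketch assuming the quiz's `field operator value` form for Wrangler filter conditions.

```python
# Completed Wrangler filter: [1] = "age" (field), [2] = ">" (operator).
field, operator = "age", ">"
filter_condition = f"{field} {operator} 30"
print(filter_condition)  # age > 30
```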
Fill all three blanks to define a pipeline schedule that runs daily at midnight.
"schedule": {"type": "[1]", "startTime": "[2]", "interval": "[3]"}
The schedule type for time-based runs is 'time-based'. The start time at midnight is '00:00', and the interval for daily runs is '24h'.
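All three blanks filled in give the schedule block below. The key names and accepted values are assumed to be exactly those the quiz presents; the dict form makes each answer checkable.

```python
import json

# Completed schedule: [1] = "time-based", [2] = "00:00", [3] = "24h".
schedule = {"type": "time-based", "startTime": "00:00", "interval": "24h"}
print(json.dumps({"schedule": schedule}, indent=2))
```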