Complete the code to create a Cloud Storage bucket for raw data ingestion.
resource "google_storage_bucket" "raw_data_bucket" {
  name     = "my-raw-data-[1]"
  location = "US"
}
The bucket name must be globally unique across Cloud Storage and should include a descriptive term like 'data' to indicate its purpose for raw data ingestion.
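Filled in, the resource might read as follows; the 'bucket' suffix is one hypothetical choice, not the only valid answer:

```hcl
resource "google_storage_bucket" "raw_data_bucket" {
  name     = "my-raw-data-bucket"  # "bucket" suffix is a hypothetical answer; name must be globally unique
  location = "US"
}
```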
Complete the code to define a Pub/Sub topic for streaming data ingestion.
resource "google_pubsub_topic" "stream_topic" {
  name = "[1]-stream-topic"
}
The topic name should clearly indicate it is related to data streaming, so 'data' is a good base name.
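With the blank filled per the hint, a completed sketch looks like:

```hcl
resource "google_pubsub_topic" "stream_topic" {
  name = "data-stream-topic"  # "data" base name, per the hint
}
```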
Fix the error in the Dataflow job configuration by supplying the required staging-location argument.
resource "google_dataflow_job" "etl_job" {
  name              = "etl-job"
  template_gcs_path = "gs://dataflow-templates/latest/Word_Count"
  [1]               = "gs://my-raw-data-bucket/tmp"
  region            = "us-central1"
}
Terraform's google_dataflow_job resource has no 'runner' argument (templated jobs always run on the Dataflow runner); the required property to fill in here is 'temp_gcs_location'.
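Filled in, a working job definition supplies temp_gcs_location, which the provider requires for staging files (the bucket path below is a hypothetical value); note the resource exposes no separate runner argument, since templated jobs always run on Dataflow:

```hcl
# Completed sketch; the staging bucket path is a hypothetical value.
resource "google_dataflow_job" "etl_job" {
  name              = "etl-job"
  template_gcs_path = "gs://dataflow-templates/latest/Word_Count"
  temp_gcs_location = "gs://my-raw-data-bucket/tmp"  # hypothetical staging bucket
  region            = "us-central1"
}
```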
Fill both blanks to create a BigQuery dataset with the correct location and description.
resource "google_bigquery_dataset" "analytics_dataset" {
  dataset_id  = "analytics"
  location    = "[1]"
  description = "[2]"
}
The dataset location should be 'US' for this example, and the description should clearly state its purpose as 'Analytics data storage'.
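With both blanks filled per the hint, the dataset resource reads:

```hcl
resource "google_bigquery_dataset" "analytics_dataset" {
  dataset_id  = "analytics"
  location    = "US"                       # multi-region location from the hint
  description = "Analytics data storage"   # description from the hint
}
```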
Fill all three blanks to define a Cloud Composer environment with the correct name, location, and machine type.
resource "google_composer_environment" "data_pipeline_env" {
  name   = "[1]"
  region = "[2]"
  config {
    node_config {
      machine_type = "[3]"
    }
  }
}
The environment name should be descriptive, e.g. 'data-pipeline-env'; 'us-central1' is a common GCP region; and 'n1-standard-1' is a typical machine type for Composer 1 nodes (Composer 2 environments size workloads via workloads_config instead of node_config.machine_type).
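Putting the three hinted values together gives the following sketch (Composer 1-style node_config, as in the exercise):

```hcl
resource "google_composer_environment" "data_pipeline_env" {
  name   = "data-pipeline-env"    # descriptive name from the hint
  region = "us-central1"          # common GCP region
  config {
    node_config {
      machine_type = "n1-standard-1"  # typical Composer 1 node machine type
    }
  }
}
```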