Dataproc makes it easy to run Spark and Hadoop jobs on Google Cloud. A user submits a job, which the Dataproc cluster receives and places in a queue. When resources become free, the cluster schedules the job onto worker nodes, where the Spark or Hadoop application processes data, for example estimating Pi with the SparkPi sample. After processing, the results are written to Cloud Storage, from which the user can retrieve them. Along the way the job's status moves from not submitted, to queued, to running, and finally to completed; if the cluster is busy, the job simply waits in the queue. Job parameters such as the SparkPi sample size directly affect processing time: more samples mean more work per job. By managing the underlying infrastructure automatically, this flow simplifies big data processing.
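A job like the SparkPi example above is typically submitted with the gcloud CLI. The following is a sketch only; the cluster name `my-cluster` and region `us-central1` are placeholder assumptions you would replace with your own:

```shell
# Submit the SparkPi example bundled with Spark to an existing
# Dataproc cluster (cluster name and region are placeholders).
gcloud dataproc jobs submit spark \
    --cluster=my-cluster \
    --region=us-central1 \
    --class=org.apache.spark.examples.SparkPi \
    --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar \
    -- 1000
```

Everything after the bare `--` is passed through to SparkPi itself; raising that number increases the amount of sampling work the job performs, and so its runtime.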
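The status lifecycle described above (not submitted, queued, running, completed) can be sketched as a small state machine. The names here are illustrative, not Dataproc's actual API:

```python
from enum import Enum, auto

class JobState(Enum):
    """Simplified job lifecycle states (illustrative, not the real Dataproc API)."""
    NOT_SUBMITTED = auto()
    QUEUED = auto()
    RUNNING = auto()
    COMPLETED = auto()

# Each state advances to exactly one successor; COMPLETED is terminal.
_TRANSITIONS = {
    JobState.NOT_SUBMITTED: JobState.QUEUED,
    JobState.QUEUED: JobState.RUNNING,
    JobState.RUNNING: JobState.COMPLETED,
}

def advance(state: JobState) -> JobState:
    """Move a job to its next lifecycle state, or raise if it is finished."""
    if state not in _TRANSITIONS:
        raise ValueError(f"{state.name} is a terminal state")
    return _TRANSITIONS[state]
```

A queued job that cannot get resources simply stays in `QUEUED`: nothing forces a transition until the cluster calls `advance` on it.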
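SparkPi estimates Pi by Monte Carlo sampling. A minimal single-machine sketch of the same idea (plain Python, no Spark) shows why a larger sample size means both a better estimate and a longer run:

```python
import random

def estimate_pi(num_samples: int, seed: int = 42) -> float:
    """Estimate Pi by sampling random points in the unit square.

    The fraction landing inside the quarter circle of radius 1
    approximates Pi/4, so multiplying by 4 approximates Pi.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    inside = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / num_samples

print(estimate_pi(100_000))  # close to 3.14159 for large sample counts
```

Doubling `num_samples` roughly doubles the work, which is the same trade-off the paragraph above describes for the real SparkPi job's sample-size parameter.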