<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>BigQuery on Corebaseit — POS · EMV · Payments · AI</title><link>https://corebaseit.com/tags/bigquery/</link><description>Recent content in BigQuery on Corebaseit — POS · EMV · Payments · AI</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><managingEditor>contact@corebaseit.com (Vincent Bevia)</managingEditor><webMaster>contact@corebaseit.com (Vincent Bevia)</webMaster><lastBuildDate>Fri, 20 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://corebaseit.com/tags/bigquery/index.xml" rel="self" type="application/rss+xml"/><item><title>ML Lifecycle Stages — The Cycle That Never Stops</title><link>https://corebaseit.com/generative-ai-foundations-part4/</link><pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate><author>contact@corebaseit.com (Vincent Bevia)</author><guid>https://corebaseit.com/generative-ai-foundations-part4/</guid><description>&lt;h1 id="ml-lifecycle-stages--the-cycle-that-never-stops">ML Lifecycle Stages — The Cycle That Never Stops
&lt;/h1>&lt;p>&lt;em>Part 4 of 4 in the Generative AI Foundations series&lt;/em>&lt;/p>
&lt;hr>
&lt;p>We&amp;rsquo;ve covered &lt;a class="link" href="https://corebaseit.com" target="_blank" rel="noopener"
>the hierarchy&lt;/a>, &lt;a class="link" href="https://corebaseit.com" target="_blank" rel="noopener"
>the landscape&lt;/a>, and &lt;a class="link" href="https://corebaseit.com" target="_blank" rel="noopener"
>the data&lt;/a>. Now let&amp;rsquo;s close the loop with the ML lifecycle itself, because building a model is not a one-time event. It&amp;rsquo;s a cycle, and an iterative one rather than a linear pipeline; understanding that distinction is critical.&lt;/p>
&lt;p>Models degrade. Data drifts. Requirements change. The cycle runs continuously, and each stage feeds back into the others. If you treat model deployment as the finish line, you&amp;rsquo;ve already lost.&lt;/p>
&lt;p>Here&amp;rsquo;s how it breaks down, with the corresponding Google Cloud tooling at each step.&lt;/p>
&lt;!-- IMAGE: ML_Life_Cycles.png -->
&lt;p>&lt;img src="https://corebaseit.com/ML_Life_Cycles.png"
loading="lazy"
alt="ML Lifecycle Stages"
>&lt;/p>
&lt;h2 id="1-data-ingestion-and-preparation">1. Data Ingestion and Preparation
&lt;/h2>&lt;p>The process of collecting, cleaning, and transforming raw data into a usable format for analysis or model training. This is where most of the unglamorous but essential work happens — data engineers will tell you that 80% of the effort in any ML project is spent here, and they&amp;rsquo;re not exaggerating.&lt;/p>
&lt;p>This stage is where &lt;a class="link" href="https://corebaseit.com" target="_blank" rel="noopener"
>data quality&lt;/a> matters most. Every characteristic we discussed in the previous post — completeness, consistency, relevance, availability, cost, format — comes into play right here. Get this stage wrong, and everything downstream inherits the debt.&lt;/p>
&lt;p>&lt;strong>Google Cloud Tools:&lt;/strong> BigQuery for data warehousing, Dataflow for data processing pipelines, and Cloud Storage for raw data storage.&lt;/p>
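&lt;p>To make the stage concrete, here is a minimal Python sketch of the preparation step, independent of any Google Cloud service. The field names and cleaning rules are illustrative assumptions, not a prescribed schema; in practice this logic would live in a Dataflow pipeline or a BigQuery transformation.&lt;/p>

```python
from datetime import datetime, timezone

REQUIRED_FIELDS = {"transaction_id", "amount", "currency", "timestamp"}

def prepare(raw_records):
    """Keep only complete records and normalise them into a consistent format."""
    cleaned = []
    for record in raw_records:
        # Completeness: skip records missing any required field or value.
        if not REQUIRED_FIELDS.issubset(record) or record.get("amount") is None:
            continue
        cleaned.append({
            "transaction_id": str(record["transaction_id"]),
            # Consistency: amounts as floats, currency codes upper-case.
            "amount": float(record["amount"]),
            "currency": str(record["currency"]).upper(),
            # Format: ISO-8601 strings parsed into timezone-aware UTC datetimes.
            "timestamp": datetime.fromisoformat(record["timestamp"]).astimezone(timezone.utc),
        })
    return cleaned

raw = [
    {"transaction_id": 1, "amount": "12.50", "currency": "usd",
     "timestamp": "2026-03-20T10:00:00+00:00"},
    {"transaction_id": 2, "currency": "EUR",
     "timestamp": "2026-03-20T10:05:00+00:00"},  # incomplete: no amount
]
clean = prepare(raw)
```

&lt;p>The three comments map directly onto the data-quality characteristics from the previous post: incomplete records are dropped, units and codes are made consistent, and timestamps land in one canonical format.&lt;/p>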
&lt;h2 id="2-model-training">2. Model Training
&lt;/h2>&lt;p>The process of creating your ML model using data. The model learns patterns and relationships from the prepared dataset. This is the compute-intensive stage where your infrastructure investment pays off — or doesn&amp;rsquo;t.&lt;/p>
&lt;p>Training is where the &lt;a class="link" href="https://corebaseit.com" target="_blank" rel="noopener"
>infrastructure layer&lt;/a> from our landscape discussion becomes tangible. You need GPUs, TPUs, or both. You need enough compute to iterate quickly, because model training is inherently experimental — you won&amp;rsquo;t get the architecture, hyperparameters, or data splits right on the first try.&lt;/p>
&lt;p>&lt;strong>Google Cloud Tools:&lt;/strong> Vertex AI for managed training, AutoML for no-code model training, and TPUs/GPUs for accelerated computation.&lt;/p>
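&lt;p>Stripped of the infrastructure, training is an optimisation loop. The toy sketch below fits a one-parameter model with plain gradient descent; it illustrates the loop that managed services run at scale and is not Vertex AI SDK code, and the learning rate and epoch count are arbitrary choices you would normally tune experimentally.&lt;/p>

```python
def train(points, lr=0.05, epochs=200):
    """Fit y = w * x with plain gradient descent on mean squared error."""
    w = 0.0
    for _ in range(epochs):
        # Gradient of MSE with respect to w, averaged over the dataset.
        grad = sum(2 * (w * x - y) * x for x, y in points) / len(points)
        w -= lr * grad
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying relationship: y = 2x
w = train(data)
```

&lt;p>Changing &lt;code>lr&lt;/code> or &lt;code>epochs&lt;/code> and re-running is exactly the iterate-and-compare workflow described above, just at laptop scale instead of GPU scale.&lt;/p>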
&lt;h2 id="3-model-deployment">3. Model Deployment
&lt;/h2>&lt;p>Making a trained model available for use in production environments where it can serve predictions. This is the bridge between &amp;ldquo;it works in a notebook&amp;rdquo; and &amp;ldquo;it works at scale for real users.&amp;rdquo;&lt;/p>
&lt;p>Deployment is where latency, throughput, and reliability become the primary concerns. A model that takes 30 seconds to return a prediction might be fine for batch processing, but it&amp;rsquo;s useless for a real-time customer-facing application. The deployment architecture has to match the serving requirements — and those requirements are almost always more demanding than what you tested in development.&lt;/p>
&lt;p>&lt;strong>Google Cloud Tools:&lt;/strong> Vertex AI Prediction for serving endpoints and Cloud Run for containerised model serving.&lt;/p>
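&lt;p>One way to check whether a serving path matches its requirements is to measure tail latency against an explicit budget. The sketch below is an illustration under stated assumptions: the 100 ms budget and the nearest-rank p95 method are choices made for the example, and the stand-in &lt;code>model&lt;/code> function substitutes for a real endpoint call.&lt;/p>

```python
import time

def p95_latency_ms(predict, inputs):
    """Measure per-request latency and report the 95th percentile in milliseconds."""
    latencies = []
    for x in inputs:
        start = time.perf_counter()
        predict(x)
        latencies.append((time.perf_counter() - start) * 1000)
    latencies.sort()
    # Nearest-rank method: the value at the 95th-percentile position.
    return latencies[max(0, int(0.95 * len(latencies)) - 1)]

def within_budget(p95_ms, budget_ms=100.0):
    """Real-time serving is viable only if tail latency fits the budget."""
    return p95_ms <= budget_ms

model = lambda x: x * 2  # stand-in for a deployed model endpoint
p95 = p95_latency_ms(model, range(200))
```

&lt;p>A model that fails this check is not necessarily a bad model; it may simply belong behind a batch pipeline rather than a real-time endpoint.&lt;/p>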
&lt;h2 id="4-model-management">4. Model Management
&lt;/h2>&lt;p>Managing and maintaining your models over time, including versioning, monitoring performance, detecting drift, and retraining. This is the stage most teams underestimate.&lt;/p>
&lt;p>A model that was 95% accurate at launch can degrade to 70% within months if nobody&amp;rsquo;s watching the metrics. The world changes. Customer behaviour shifts. New data patterns emerge that the model has never seen. Continuous monitoring and retraining pipelines are not optional — they&amp;rsquo;re operational necessities.&lt;/p>
&lt;p>This is also where &lt;a class="link" href="https://corebaseit.com" target="_blank" rel="noopener"
>scaffolding&lt;/a> proves its value. The guardrails, logging, and observability infrastructure you built during development become your early warning system in production. Without them, you&amp;rsquo;re flying blind.&lt;/p>
&lt;p>&lt;strong>Google Cloud Tools:&lt;/strong> Vertex AI Model Registry, Vertex AI Model Monitoring, Vertex AI Feature Store, and Vertex AI Pipelines.&lt;/p>
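&lt;p>Drift detection deserves a concrete example. The Population Stability Index (PSI) is one widely used statistic for comparing a live score distribution against the training-time baseline. The sketch below is a self-contained illustration, not the mechanism Vertex AI Model Monitoring uses internally, and the rule-of-thumb thresholds (below 0.1 stable, above 0.25 significant drift) are conventions rather than hard limits.&lt;/p>

```python
import math

def psi(expected, actual, bins=10, lo=0.0, hi=1.0):
    """Population Stability Index between a baseline and a live score distribution."""
    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = min(bins - 1, int((v - lo) / (hi - lo) * bins))
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 1000 for i in range(1000)]                    # uniform scores at launch
shifted = [min(0.999, 0.3 + i / 2000) for i in range(1000)]   # live traffic drifted upward
```

&lt;p>Wiring a check like this into a scheduled job, with an alert or an automatic retraining trigger above the threshold, is the minimum viable version of the monitoring this stage calls for.&lt;/p>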
&lt;hr>
&lt;h2 id="the-cycle-continues">The Cycle Continues
&lt;/h2>&lt;p>The arrow from Model Management loops back to Data Ingestion. That&amp;rsquo;s not a diagram convenience — it&amp;rsquo;s the reality of production ML. Monitoring reveals drift, drift triggers retraining, retraining requires fresh data, fresh data requires ingestion and preparation, and the cycle begins again.&lt;/p>
&lt;p>The teams that succeed with ML in production are the ones that design for this cycle from day one. They don&amp;rsquo;t treat it as four sequential steps; they treat it as a continuous loop with automation at every transition point.&lt;/p>
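&lt;p>The loop itself can be sketched as a control structure. Everything below is a hypothetical skeleton: the stage functions are stubs you would replace with real pipeline steps (for example, Vertex AI Pipelines components), but it shows the shape of the automation, where monitoring output decides whether the cycle re-enters ingestion and training.&lt;/p>

```python
def run_cycle(model, fetch_fresh_data, detect_drift, retrain, deploy, max_iterations=3):
    """One closed loop: monitor the live model and, on drift, re-enter the earlier stages."""
    history = []
    for _ in range(max_iterations):
        data = fetch_fresh_data()        # stage 1: ingestion and preparation
        if detect_drift(model, data):    # stage 4: monitoring flags degradation
            model = retrain(data)        # stage 2: training on fresh data
            deploy(model)                # stage 3: the new version goes live
            history.append("retrained")
        else:
            history.append("stable")
    return model, history

# Stub stages for illustration: drift appears only on the first check.
deployed = []
batches = iter([1, 2, 3])
final, history = run_cycle(
    "model-v1",
    fetch_fresh_data=lambda: next(batches),
    detect_drift=lambda model, data: data == 1,
    retrain=lambda data: "model-v2",
    deploy=deployed.append,
)
```

&lt;p>In production the interesting work is at the transition points: each arrow in the diagram becomes a trigger, a pipeline run, or an approval gate rather than a line of code in a loop.&lt;/p>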
&lt;hr>
&lt;blockquote>
&lt;p>&lt;strong>The bottom line:&lt;/strong> The ML lifecycle is not build-once-deploy-forever. It&amp;rsquo;s a living system that requires continuous investment in data, compute, monitoring, and iteration. Plan for the loop, not just the launch.&lt;/p>&lt;/blockquote>
&lt;hr>
&lt;h2 id="references">References
&lt;/h2>&lt;ol>
&lt;li>
&lt;p>&lt;strong>Google Cloud&lt;/strong> — Generative AI on Vertex AI Documentation.
&lt;a class="link" href="https://docs.cloud.google.com/vertex-ai/generative-ai/docs" target="_blank" rel="noopener"
>https://docs.cloud.google.com/vertex-ai/generative-ai/docs&lt;/a>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Google Cloud&lt;/strong> — Generative AI Beginner&amp;rsquo;s Guide.
&lt;a class="link" href="https://docs.cloud.google.com/vertex-ai/generative-ai/docs/learn/overview" target="_blank" rel="noopener"
>https://docs.cloud.google.com/vertex-ai/generative-ai/docs/learn/overview&lt;/a>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Google Cloud&lt;/strong> — Generative AI Leader Certification.
&lt;a class="link" href="https://cloud.google.com/learn/certification/generative-ai-leader" target="_blank" rel="noopener"
>https://cloud.google.com/learn/certification/generative-ai-leader&lt;/a>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Google Cloud Skills Boost&lt;/strong> — Generative AI Leader Learning Path.
&lt;a class="link" href="https://www.skills.google/paths/1951" target="_blank" rel="noopener"
>https://www.skills.google/paths/1951&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;hr>
&lt;p>&lt;em>This is the final post in the Generative AI Foundations series. Read the full series: &lt;a class="link" href="" >Part 1: The AI Hierarchy&lt;/a> · &lt;a class="link" href="" >Part 2: The Gen AI Landscape&lt;/a> · &lt;a class="link" href="" >Part 3: Data Quality&lt;/a>&lt;/em>&lt;/p>
&lt;hr>
&lt;p>&lt;em>Vincent Bevia — &lt;a class="link" href="https://corebaseit.com" target="_blank" rel="noopener"
>corebaseit.com&lt;/a>&lt;/em>&lt;/p></description></item></channel></rss>