<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Vertex AI on Corebaseit — POS · EMV · Payments · AI</title><link>https://corebaseit.com/tags/vertex-ai/</link><description>Recent content in Vertex AI on Corebaseit — POS · EMV · Payments · AI</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><managingEditor>contact@corebaseit.com (Vincent Bevia)</managingEditor><webMaster>contact@corebaseit.com (Vincent Bevia)</webMaster><lastBuildDate>Fri, 20 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://corebaseit.com/tags/vertex-ai/index.xml" rel="self" type="application/rss+xml"/><item><title>ML Lifecycle Stages — The Cycle That Never Stops</title><link>https://corebaseit.com/generative-ai-foundations-part4/</link><pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate><author>contact@corebaseit.com (Vincent Bevia)</author><guid>https://corebaseit.com/generative-ai-foundations-part4/</guid><description>&lt;h1 id="ml-lifecycle-stages--the-cycle-that-never-stops">ML Lifecycle Stages — The Cycle That Never Stops
&lt;/h1>&lt;p>&lt;em>Part 4 of 4 in the Generative AI Foundations series&lt;/em>&lt;/p>
&lt;hr>
&lt;p>We&amp;rsquo;ve covered &lt;a class="link" href="https://corebaseit.com" target="_blank" rel="noopener"
>the hierarchy&lt;/a>, &lt;a class="link" href="https://corebaseit.com" target="_blank" rel="noopener"
>the landscape&lt;/a>, and &lt;a class="link" href="https://corebaseit.com" target="_blank" rel="noopener"
>the data&lt;/a>. Now let&amp;rsquo;s close the loop with the ML lifecycle itself, because building a model is not a one-time event. It&amp;rsquo;s a cycle, and an iterative one: understanding that it loops rather than running start-to-finish is the critical insight.&lt;/p>
&lt;p>Models degrade. Data drifts. Requirements change. The cycle runs continuously, and each stage feeds back into the others. If you treat model deployment as the finish line, you&amp;rsquo;ve already lost.&lt;/p>
&lt;p>Here&amp;rsquo;s how it breaks down, with the corresponding Google Cloud tooling at each step.&lt;/p>
&lt;!-- IMAGE: ML_Life_Cycles.png -->
&lt;p>&lt;img src="https://corebaseit.com/ML_Life_Cycles.png"
loading="lazy"
alt="ML Lifecycle Stages"
>&lt;/p>
&lt;h2 id="1-data-ingestion-and-preparation">1. Data Ingestion and Preparation
&lt;/h2>&lt;p>The process of collecting, cleaning, and transforming raw data into a usable format for analysis or model training. This is where most of the unglamorous but essential work happens — data engineers will tell you that 80% of any ML project is spent here, and they&amp;rsquo;re not exaggerating.&lt;/p>
&lt;p>This stage is where &lt;a class="link" href="https://corebaseit.com" target="_blank" rel="noopener"
>data quality&lt;/a> matters most. Every characteristic we discussed in the previous post — completeness, consistency, relevance, availability, cost, format — comes into play right here. Get this stage wrong, and everything downstream inherits the debt.&lt;/p>
&lt;p>&lt;strong>Google Cloud Tools:&lt;/strong> BigQuery for data warehousing, Dataflow for data processing pipelines, and Cloud Storage for raw data storage.&lt;/p>
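&lt;p>As a concrete (if toy) illustration, here&amp;rsquo;s what those quality checks look like in code. This is a minimal sketch in plain Python; the field names and validation rules are invented for the example, not taken from any particular pipeline.&lt;/p>

```python
# Sketch: a minimal pre-training validation pass over raw records.
# Field names ("amount", "currency", "timestamp") are illustrative.

REQUIRED_FIELDS = ("amount", "currency", "timestamp")

def clean(records):
    """Keep only complete, consistent records; normalise formats."""
    usable = []
    for rec in records:
        # Completeness: every required field present and non-empty.
        if any(rec.get(f) in (None, "") for f in REQUIRED_FIELDS):
            continue
        # Consistency: one canonical currency format (upper-case code).
        rec = dict(rec, currency=str(rec["currency"]).upper())
        # Validity: drop records with non-positive amounts.
        if float(rec["amount"]) > 0:
            usable.append(rec)
    return usable

raw = [
    {"amount": "12.50", "currency": "usd", "timestamp": "2026-03-01"},
    {"amount": "", "currency": "EUR", "timestamp": "2026-03-02"},      # incomplete
    {"amount": "-3.00", "currency": "GBP", "timestamp": "2026-03-03"}, # invalid
]
print(clean(raw))  # one usable record, currency normalised to "USD"
```

&lt;p>At production scale this logic lives in a Dataflow pipeline or a BigQuery transformation rather than a Python loop, but the checks themselves are the same.&lt;/p>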
&lt;h2 id="2-model-training">2. Model Training
&lt;/h2>&lt;p>The process of creating your ML model using data. The model learns patterns and relationships from the prepared dataset. This is the compute-intensive stage where your infrastructure investment pays off — or doesn&amp;rsquo;t.&lt;/p>
&lt;p>Training is where the &lt;a class="link" href="https://corebaseit.com" target="_blank" rel="noopener"
>infrastructure layer&lt;/a> from our landscape discussion becomes tangible. You need GPUs, TPUs, or both. You need enough compute to iterate quickly, because model training is inherently experimental — you won&amp;rsquo;t get the architecture, hyperparameters, or data splits right on the first try.&lt;/p>
&lt;p>&lt;strong>Google Cloud Tools:&lt;/strong> Vertex AI for managed training, AutoML for no-code model training, and TPUs/GPUs for accelerated computation.&lt;/p>
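&lt;p>The experimental nature of training is easiest to see in miniature. The sketch below fits a toy one-parameter model by gradient descent; the data, learning rate, and epoch count are illustrative stand-ins for the hyperparameters you would sweep at scale on managed infrastructure.&lt;/p>

```python
# Sketch: training as iteration. A toy gradient-descent fit of y = w * x.
# The data and learning rate are illustrative.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x

def train(lr=0.01, epochs=200):
    w = 0.0
    for _ in range(epochs):
        # Gradient of mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

print(round(train(), 2))  # prints 1.99
```

&lt;p>Every knob here (learning rate, epochs, even the loss) is a choice you revisit across runs, which is exactly why fast iteration on GPUs/TPUs matters.&lt;/p>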
&lt;h2 id="3-model-deployment">3. Model Deployment
&lt;/h2>&lt;p>Making a trained model available for use in production environments where it can serve predictions. This is the bridge between &amp;ldquo;it works in a notebook&amp;rdquo; and &amp;ldquo;it works at scale for real users.&amp;rdquo;&lt;/p>
&lt;p>Deployment is where latency, throughput, and reliability become the primary concerns. A model that takes 30 seconds to return a prediction might be fine for batch processing, but it&amp;rsquo;s useless for a real-time customer-facing application. The deployment architecture has to match the serving requirements — and those requirements are almost always more demanding than what you tested in development.&lt;/p>
&lt;p>&lt;strong>Google Cloud Tools:&lt;/strong> Vertex AI Prediction for serving endpoints and Cloud Run for containerised model serving.&lt;/p>
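&lt;p>A serving-side latency budget can be sketched in a few lines. The 200 ms budget, the model stub, and the fallback below are all illustrative; a real serving path would wrap an actual endpoint.&lt;/p>

```python
import time

# Sketch: enforcing a per-request latency budget at the serving layer.
# The budget and fallback strategy are illustrative.

LATENCY_BUDGET_S = 0.2  # real-time serving budget

def predict_with_budget(model_fn, features, fallback):
    """Serve model_fn(features), falling back if it blows the budget."""
    start = time.monotonic()
    result = model_fn(features)
    elapsed = time.monotonic() - start
    if elapsed > LATENCY_BUDGET_S:
        # Too slow for the real-time path: serve a cached/heuristic answer.
        return fallback
    return result

fast_model = lambda f: sum(f)  # well within budget
print(predict_with_budget(fast_model, [1, 2, 3], fallback=0))  # prints 6
```

&lt;p>The same model behind a batch pipeline would skip this check entirely, which is the point: the deployment architecture, not the model, decides what &amp;ldquo;fast enough&amp;rdquo; means.&lt;/p>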
&lt;h2 id="4-model-management">4. Model Management
&lt;/h2>&lt;p>Managing and maintaining your models over time, including versioning, monitoring performance, detecting drift, and retraining. This is the stage most teams underestimate.&lt;/p>
&lt;p>A model that was 95% accurate at launch can degrade to 70% within months if nobody&amp;rsquo;s watching the metrics. The world changes. Customer behaviour shifts. New data patterns emerge that the model has never seen. Continuous monitoring and retraining pipelines are not optional — they&amp;rsquo;re operational necessities.&lt;/p>
&lt;p>This is also where &lt;a class="link" href="https://corebaseit.com" target="_blank" rel="noopener"
>scaffolding&lt;/a> proves its value. The guardrails, logging, and observability infrastructure you built during development become your early warning system in production. Without them, you&amp;rsquo;re flying blind.&lt;/p>
&lt;p>&lt;strong>Google Cloud Tools:&lt;/strong> Vertex AI Model Registry, Vertex AI Model Monitoring, Vertex AI Feature Store, and Vertex AI Pipelines.&lt;/p>
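&lt;p>Drift detection doesn&amp;rsquo;t have to be exotic to be useful. Here&amp;rsquo;s a deliberately simple sketch that flags when a live feature&amp;rsquo;s mean wanders from the training baseline; the threshold is illustrative, and a managed service tracks far more than one statistic.&lt;/p>

```python
# Sketch: a minimal drift check comparing live feature values against
# the training baseline. The 0.25-sigma threshold is illustrative.

def mean(xs):
    return sum(xs) / len(xs)

def drift_detected(baseline, live, threshold=0.25):
    """Flag drift when the live mean moves more than `threshold`
    standard deviations away from the baseline mean."""
    mu = mean(baseline)
    var = sum((x - mu) ** 2 for x in baseline) / len(baseline)
    sigma = var ** 0.5
    shift = abs(mean(live) - mu) / sigma
    return shift > threshold

baseline = [10, 11, 9, 10, 10, 12, 9, 11]
print(drift_detected(baseline, [10, 10, 11, 10]))   # stable: False
print(drift_detected(baseline, [15, 16, 14, 15]))   # shifted: True
```

&lt;p>A check like this, wired to an alert and a retraining trigger, is the skeleton of the monitoring loop that managed tooling automates for you.&lt;/p>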
&lt;hr>
&lt;h2 id="the-cycle-continues">The Cycle Continues
&lt;/h2>&lt;p>The arrow from Model Management loops back to Data Ingestion. That&amp;rsquo;s not a diagram convenience — it&amp;rsquo;s the reality of production ML. Monitoring reveals drift, drift triggers retraining, retraining requires fresh data, fresh data requires ingestion and preparation, and the cycle begins again.&lt;/p>
&lt;p>The teams that succeed with ML in production are the ones that design for this cycle from day one. They don&amp;rsquo;t treat it as four sequential steps; they treat it as a continuous loop with automation at every transition point.&lt;/p>
&lt;hr>
&lt;blockquote>
&lt;p>&lt;strong>The bottom line:&lt;/strong> The ML lifecycle is not build-once-deploy-forever. It&amp;rsquo;s a living system that requires continuous investment in data, compute, monitoring, and iteration. Plan for the loop, not just the launch.&lt;/p>&lt;/blockquote>
&lt;hr>
&lt;h2 id="references">References
&lt;/h2>&lt;ol>
&lt;li>
&lt;p>&lt;strong>Google Cloud&lt;/strong> — Generative AI on Vertex AI Documentation.
&lt;a class="link" href="https://docs.cloud.google.com/vertex-ai/generative-ai/docs" target="_blank" rel="noopener"
>https://docs.cloud.google.com/vertex-ai/generative-ai/docs&lt;/a>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Google Cloud&lt;/strong> — Generative AI Beginner&amp;rsquo;s Guide.
&lt;a class="link" href="https://docs.cloud.google.com/vertex-ai/generative-ai/docs/learn/overview" target="_blank" rel="noopener"
>https://docs.cloud.google.com/vertex-ai/generative-ai/docs/learn/overview&lt;/a>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Google Cloud&lt;/strong> — Generative AI Leader Certification.
&lt;a class="link" href="https://cloud.google.com/learn/certification/generative-ai-leader" target="_blank" rel="noopener"
>https://cloud.google.com/learn/certification/generative-ai-leader&lt;/a>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Google Cloud Skills Boost&lt;/strong> — Generative AI Leader Learning Path.
&lt;a class="link" href="https://www.skills.google/paths/1951" target="_blank" rel="noopener"
>https://www.skills.google/paths/1951&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;hr>
&lt;p>&lt;em>This is the final post in the Generative AI Foundations series. Read the full series: &lt;a class="link" href="" >Part 1: The AI Hierarchy&lt;/a> · &lt;a class="link" href="" >Part 2: The Gen AI Landscape&lt;/a> · &lt;a class="link" href="" >Part 3: Data Quality&lt;/a>&lt;/em>&lt;/p>
&lt;hr>
&lt;p>&lt;em>Vincent Bevia — &lt;a class="link" href="https://corebaseit.com" target="_blank" rel="noopener"
>corebaseit.com&lt;/a>&lt;/em>&lt;/p></description></item><item><title>The Generative AI Landscape — A Layered View</title><link>https://corebaseit.com/generative-ai-foundations-part2/</link><pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate><author>contact@corebaseit.com (Vincent Bevia)</author><guid>https://corebaseit.com/generative-ai-foundations-part2/</guid><description>&lt;h1 id="the-generative-ai-landscape--a-layered-view">The Generative AI Landscape — A Layered View
&lt;/h1>&lt;p>&lt;em>Part 2 of 4 in the Generative AI Foundations series&lt;/em>&lt;/p>
&lt;hr>
&lt;p>Now that we&amp;rsquo;ve established &lt;a class="link" href="https://corebaseit.com" target="_blank" rel="noopener"
>the hierarchy&lt;/a> — what AI, ML, Deep Learning, and Gen AI actually &lt;em>are&lt;/em> — let&amp;rsquo;s look at how the Gen AI ecosystem is structured as a working system. Because knowing the theory is one thing; understanding the architecture is what lets you build on it.&lt;/p>
&lt;p>The Gen AI landscape is a stack. Five layers, each dependent on the one below it, with value flowing upward.&lt;/p>
&lt;!-- IMAGE: GenerativeAILandscape.png -->
&lt;p>&lt;img src="https://corebaseit.com/GenerativeAILandscape.png"
loading="lazy"
alt="The Generative AI Landscape — A Layered View"
>&lt;/p>
&lt;h2 id="infrastructure">Infrastructure
&lt;/h2>&lt;p>At its foundation, the Gen AI stack is an infrastructure play. We&amp;rsquo;re talking about the raw computing muscle — GPUs, TPUs, high-throughput servers — along with the storage and orchestration software needed to train and serve models at scale. Without this layer, nothing else exists.&lt;/p>
&lt;p>Google Cloud&amp;rsquo;s AI-optimised infrastructure includes custom TPUs (Tensor Processing Units), high-performance GPUs, and the AI Hypercomputer architecture designed specifically for AI workloads.&lt;/p>
&lt;p>&lt;strong>Business Implication:&lt;/strong> Organizations don&amp;rsquo;t need to invest in expensive on-premises hardware. Cloud infrastructure provides scalable, pay-as-you-go access to AI computing power.&lt;/p>
&lt;h2 id="models">Models
&lt;/h2>&lt;p>Sitting on top of that infrastructure is the &lt;strong>model&lt;/strong> itself: a complex algorithm trained on massive datasets, learning statistical patterns and relationships that allow it to generate text, translate languages, answer questions, and produce content that, at its best, feels indistinguishable from human output. The model is the engine, but an engine alone doesn&amp;rsquo;t get you anywhere.&lt;/p>
&lt;p>This layer includes foundation models (Gemini, Gemma, Imagen, Veo), open-source models, and third-party models available through platforms like Vertex AI Model Garden.&lt;/p>
&lt;p>&lt;strong>Business Implication:&lt;/strong> Organizations can choose from pre-built models (reducing time to market) or train custom models. Model Garden provides access to 150+ models, giving flexibility across use cases.&lt;/p>
&lt;h2 id="platform">Platform
&lt;/h2>&lt;p>That&amp;rsquo;s where the &lt;strong>platform layer&lt;/strong> comes in. Think of it as the middleware — APIs, data management pipelines, deployment tooling — that bridges the gap between a trained model and the software that actually consumes it. It abstracts away the infrastructure complexity and gives developers a clean interface to build on.&lt;/p>
&lt;p>Vertex AI is Google Cloud&amp;rsquo;s unified ML platform for this layer, providing tools for the entire ML workflow: build, train, deploy, and manage.&lt;/p>
&lt;p>&lt;strong>Business Implication:&lt;/strong> Platforms abstract away infrastructure complexity, enabling teams to focus on building AI solutions rather than managing servers. Low-code/no-code tools democratise access to AI.&lt;/p>
&lt;h2 id="agents">Agents
&lt;/h2>&lt;p>Next, the &lt;strong>agent&lt;/strong>. This is where things get interesting. An agent is a piece of software that doesn&amp;rsquo;t just call a model — it &lt;em>reasons&lt;/em> over inputs, selects tools, and iterates toward a goal. It&amp;rsquo;s the autonomous decision-making layer, and it&amp;rsquo;s the frontier everyone is racing toward right now.&lt;/p>
&lt;p>Agents consist of a reasoning loop, tools, and a model. They can be deterministic (predefined paths), generative (LLM-powered natural language), or hybrid (combining both). Examples include customer service agents, code agents, data agents, and security agents.&lt;/p>
&lt;p>&lt;strong>Business Implication:&lt;/strong> Agents represent the next evolution of AI applications, capable of autonomous task completion. They can significantly reduce human workload in customer support, data analysis, and software development.&lt;/p>
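&lt;p>The reasoning-loop-plus-tools-plus-model pattern fits in a few lines once you stub out the model. In this sketch the &amp;ldquo;model&amp;rdquo; is hard-coded decision logic and the tools are toy functions; every name here is invented for illustration, and in a real agent an LLM would make the decisions.&lt;/p>

```python
# Sketch: an agent = reasoning loop + tools + model, in miniature.
# stub_model stands in for an LLM; the tools are toy functions.

TOOLS = {
    "lookup_order": lambda arg: f"order {arg}: shipped",
    "refund": lambda arg: f"refund issued for {arg}",
}

def stub_model(goal, history):
    """Stand-in for an LLM: decide the next step from goal and history."""
    if not history:
        return ("lookup_order", goal)    # step 1: gather facts
    if "shipped" in history[-1]:
        return ("done", history[-1])     # goal reached, stop
    return ("refund", goal)

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):            # the reasoning loop
        action, arg = stub_model(goal, history)
        if action == "done":
            return arg
        history.append(TOOLS[action](arg))  # tool call, result fed back
    return "gave up"

print(run_agent("A123"))  # prints: order A123: shipped
```

&lt;p>Swap the stub for a model call and the dictionary for real APIs and you have the skeleton of a deterministic-plus-generative hybrid agent.&lt;/p>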
&lt;h2 id="applications">Applications
&lt;/h2>&lt;p>Finally, at the top of the stack, sits the &lt;strong>Gen-AI-powered application&lt;/strong> — the user-facing layer. This is what end users actually see and interact with. It&amp;rsquo;s the product surface that translates all the layers beneath it into something useful, intuitive, and accessible.&lt;/p>
&lt;p>Examples include the Gemini app, Gemini for Google Workspace, and custom enterprise applications built with Vertex AI.&lt;/p>
&lt;p>&lt;strong>Business Implication:&lt;/strong> Applications deliver the tangible business value of AI. They translate the underlying technology into tools that employees, customers, and partners can use directly.&lt;/p>
&lt;hr>
&lt;h2 id="the-missing-piece-scaffolding">The Missing Piece: Scaffolding
&lt;/h2>&lt;!-- IMAGE: CoreLayer_GenAI.png -->
&lt;p>&lt;img src="https://corebaseit.com/CoreLayer_GenAI.png"
loading="lazy"
alt="Core Layers of the Gen AI Landscape"
>&lt;/p>
&lt;p>But here&amp;rsquo;s the thing most people miss: none of these layers work in isolation. What connects them — what makes the whole stack operational — is &lt;strong>scaffolding&lt;/strong>.&lt;/p>
&lt;p>Scaffolding is the surrounding code, orchestration logic, and glue infrastructure that wraps around a foundation model to turn a raw API call into a functioning system. We&amp;rsquo;re talking about prompt templates, memory management, tool routing, output parsing, guardrails, retry logic, error handling — everything that sits between &amp;ldquo;call the model&amp;rdquo; and &amp;ldquo;deliver a reliable result to the user.&amp;rdquo;&lt;/p>
&lt;p>Without scaffolding, you have a model that can generate text. &lt;em>With&lt;/em> scaffolding, you have an application that can reason, recover from errors, maintain context across turns, and chain multiple steps together toward a goal. It&amp;rsquo;s what makes agents actually work in production.&lt;/p>
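&lt;p>To make that concrete, here&amp;rsquo;s a minimal scaffolding sketch: a retry loop with backoff, output parsing, and a simple guardrail wrapped around a stubbed model call. The stub, the JSON schema, and the validation rule are invented for the example; a real system would call a hosted model API at that point.&lt;/p>

```python
import json
import time

# Sketch of minimal scaffolding around a model call: retry with backoff,
# output parsing, and a guardrail. call_model is a stub for illustration.

def call_model(prompt):
    # Stub: pretend the model answered with JSON text.
    return '{"sentiment": "positive", "confidence": 0.93}'

def scaffolded_call(prompt, retries=3, backoff_s=0.1):
    for attempt in range(retries):
        raw = call_model(prompt)
        try:
            parsed = json.loads(raw)              # output parsing
        except json.JSONDecodeError:
            time.sleep(backoff_s * 2 ** attempt)  # retry with backoff
            continue
        # Guardrail: only accept well-formed, expected answers.
        if parsed.get("sentiment") in ("positive", "negative", "neutral"):
            return parsed
    raise RuntimeError("model output never passed validation")

print(scaffolded_call("Classify: great service!"))
```

&lt;p>None of this is glamorous, and all of it is load-bearing: the raw API call is one line, and everything around it is what makes the result dependable.&lt;/p>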
&lt;p>If you&amp;rsquo;re an engineer, scaffolding is where you&amp;rsquo;ll spend most of your time. If you&amp;rsquo;re a leader, it&amp;rsquo;s the part of the stack you need to budget for — because the model is the easy part. Making it reliable, safe, and operational at scale? That&amp;rsquo;s scaffolding.&lt;/p>
&lt;hr>
&lt;blockquote>
&lt;p>&lt;strong>Remember the five layers bottom-to-top:&lt;/strong> Infrastructure → Models → Platforms → Agents → Applications. Each layer depends on the one below it, and the value flows upward.&lt;/p>&lt;/blockquote>
&lt;hr>
&lt;p>&lt;em>Next in the series: &lt;a class="link" href="" >Data Quality and Accessibility&lt;/a>&lt;/em>&lt;/p>
&lt;hr>
&lt;p>&lt;em>Vincent Bevia — &lt;a class="link" href="https://corebaseit.com" target="_blank" rel="noopener"
>corebaseit.com&lt;/a>&lt;/em>&lt;/p></description></item></channel></rss>