🪫 AI Factories Power CRUNCH 🐝 Still Buzzing From AI INFRA 🤝 Building Bridges to Academia 💃 Last Events of 2026

🪫 AI Factories Power CRUNCH

At AI INFRA SUMMIT 4, one message dominated every conversation. Power has become the defining constraint for artificial intelligence. Data centers built for cloud and enterprise workloads are nearing their physical limits, and energy availability now dictates how far and how fast AI can expand.

Energy as the Bottleneck

Average rack power consumption has jumped from around 10 kilowatts to more than 250 kilowatts. The Department of Energy projects roughly 200 gigawatts of behind-the-meter demand by 2030, a number comparable to the total capacity of several national grids. This demand surge is reshaping infrastructure strategy across the industry.

Existing data centers cannot absorb this scale. Many operators are now building their own generation or colocating facilities next to existing power sources. Private microgrids, gas turbines, and modular nuclear projects are moving from pilot stage to standard practice. The relationship between compute and electricity has become direct. Each megawatt determines how much intelligence can run.

The Constant Load of Inference

The shift from training to inference has intensified the energy problem. Training occurs in limited bursts. Inference operates continuously. Once deployed, models remain active at all hours, generating responses, predictions, and analysis in real time.

The International Energy Agency expects global data-center electricity use to reach roughly 950 terawatt-hours by 2030, nearly twice today’s level. Much of this increase comes from inference systems running at the edge, close to users and devices. These distributed workloads require consistent power delivery rather than peak-and-valley cycles. Efficiency now determines profitability, as small improvements in cooling, memory flow, or power management translate into major cost reductions over time.
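
To make that "small improvements compound" point concrete, here is a rough back-of-the-envelope sketch in Python. The fleet size, PUE, electricity rate, and the size of the efficiency gain are illustrative assumptions, not figures from the summit; only the roughly 250 kW rack figure comes from the discussion above.

# Back-of-the-envelope sketch (illustrative numbers, not summit figures):
# estimate the annual energy bill for an inference fleet running 24/7,
# and what a modest efficiency gain is worth.

RACKS = 1_000            # assumed fleet size
KW_PER_RACK = 250        # per the ~250 kW racks cited above
PUE = 1.3                # assumed power usage effectiveness (cooling/overhead)
USD_PER_KWH = 0.08       # assumed industrial electricity rate
HOURS_PER_YEAR = 8_760   # continuous inference load

annual_kwh = RACKS * KW_PER_RACK * PUE * HOURS_PER_YEAR
annual_cost = annual_kwh * USD_PER_KWH

efficiency_gain = 0.05   # a 5% improvement in cooling or power management
savings = annual_cost * efficiency_gain

print(f"Annual energy: {annual_kwh / 1e6:,.0f} GWh")
print(f"Annual cost:   ${annual_cost / 1e6:,.1f}M")
print(f"5% efficiency gain is worth ${savings / 1e6:,.1f}M per year")

Under these assumed rates, a 5 percent gain is worth roughly $11M a year for a 1,000-rack fleet, which is why small efficiency improvements now carry the same weight as capacity planning.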

Redefining Infrastructure Priorities

Energy has become the foundation of competitiveness in AI infrastructure.

  • Location influences success. Regions with renewable energy, flexible grid capacity, or lower transmission losses are drawing the next generation of builds.

  • Partnerships between data-center developers and utilities are expanding, ensuring stable supply and mutual investment in local power systems.

  • Metrics are evolving. Operators measure success through watts per operation, energy reuse, and uptime per kilowatt instead of only throughput or model scale.

These adjustments mark a shift from digital optimization to physical planning. The efficiency of electrons now matters as much as the efficiency of code.
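
As a rough illustration of what tracking those newer metrics might look like in practice, here is a short Python sketch. The field names and formulas are assumptions made for the example (for instance, expressing watts per operation as joules per inference), not standardized industry definitions.

# Illustrative sketch of the efficiency metrics mentioned above.
# Field names and formulas are assumptions for the example, not standards.

from dataclasses import dataclass

@dataclass
class FacilityWindow:
    """Measurements for one facility over a reporting window (assumed schema)."""
    inference_ops: int        # total inference requests served
    it_energy_kwh: float      # energy delivered to IT equipment
    total_energy_kwh: float   # facility energy including cooling/overhead
    reused_energy_kwh: float  # e.g. heat exported to district heating
    uptime_hours: float       # hours the facility served traffic
    avg_power_kw: float       # average facility power draw

    def joules_per_op(self) -> float:
        # "watts per operation" expressed as energy per inference (1 kWh = 3.6e6 J)
        return self.it_energy_kwh * 3.6e6 / self.inference_ops

    def energy_reuse_factor(self) -> float:
        # share of facility energy recovered for other uses
        return self.reused_energy_kwh / self.total_energy_kwh

    def uptime_per_kw(self) -> float:
        # service hours delivered per kilowatt of average draw
        return self.uptime_hours / self.avg_power_kw

window = FacilityWindow(
    inference_ops=4_000_000_000,
    it_energy_kwh=1_200_000,
    total_energy_kwh=1_560_000,
    reused_energy_kwh=120_000,
    uptime_hours=720,
    avg_power_kw=2_200,
)
print(f"{window.joules_per_op():.0f} J/op, "
      f"energy reuse {window.energy_reuse_factor():.2%}, "
      f"{window.uptime_per_kw():.3f} h/kW")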

Looking Forward

The discussions made one reality clear. The future of AI infrastructure depends on how the industry manages energy, not only on how it manages data. Building smarter systems will require equal focus on generation, cooling, and consumption. The race for progress no longer starts with silicon; it starts with power.

Building Bridges: Industry to Academia

We were honored to keynote the San Jose State Research & Innovation Exchange Series (RIES), where we joined the university’s AI leadership to discuss the future of workforce development. Together, we explored actionable ways to strengthen alignment between industry and academia—a collaboration that is increasingly essential as AI transforms every sector.

During the session, Bill Barry delivered the State of Infra presentation, highlighting the industry’s pressing need for deeper partnership, shared knowledge, and a more seamless talent pipeline. He outlined how tighter collaboration can help build a bridge back to academia, ensuring that the next generation of technical talent is prepared for the rapidly evolving AI landscape.

The conversation underscored a shared commitment: working together to equip tomorrow’s workforce through stronger, more intentional industry–university collaboration.

Upcoming Events

We’re heading into the final stretch of 2026 with a powerful lineup of events centered on real-world enterprise AI adoption.

On November 21, join 50–75 technical builders and leaders for bold, kinetic conversation and curated connections inside McKinsey’s Redwood City office.

Then, Supermicro and AMD will host Enterprise AI Day at the stunning Hotel Valencia Santana Row—a half-day gathering featuring enterprise-leading panels and holiday-season networking.

Finally, on December 16, we’re teaming up with the AI Tech Leaders Club to close out the year with a hands-on workshop designed specifically for senior Technical Decision Makers at Snowflake’s office in Menlo Park.

Bay Area Startups Collectively Secured $4.06B this week

Bay Area startups closed $4.06B in fundings in week 2 of November, taking the month-to-date total to just over $6B. One billion-dollar-plus megadeal and five more megadeals contributed to that total, with Anysphere's $2.3B Series D the largest this week, and so far this month.

Ballooning valuations: Valuations at the frontier AI model companies continue to rise. OpenAI's August valuation of $300B increased by $200B in two months, reaching $500B in an October secondary sale. Anthropic shot up from $61.5B to $183B, almost 3x, in six months. And now former OpenAI exec Mira Murati is reportedly raising again for her company, Thinking Machines Lab, at a $50B valuation, quadrupling its July valuation. It's not surprising that valuations for AI companies across the board and at all stages continue to rise as well.

The Venture Market Report for Q3 2025 is now posted for browsing online. Curious about fundings and venture funds raised? And money coming back into the valley through M&A and IPOs? The data for the Venture Market Report comes directly from the Pulse of the Valley weekday newsletter and summarizes fundings by sector and series, acquisitions and IPOs (with details), plus new funds raised by investors and new VC firms launched. Check it out and sign up for a one-week free trial of the Pulse while you're there.

For salespeople and service providers interested in who's just closed new fundings: The Pulse of the Valley weekday newsletter keeps you current with capital moving through the startup ecosystem in SV and NorCal. Startups raising rounds and their investors; investors raising and closing funds; liquidity (M&A and IPOs) and the senior executives on both sides starting new positions. Details include investor and executive connections + contact information on tens of thousands of fundings. Check it out with a free trial; sign up here.

Follow us on LinkedIn to stay on top of what's happening in 2025 in startup fundings, M&A and IPOs, VC fundraising plus new executive hires & investor moves.

Early Stage:

  • Majestic Labs closed a $100M Series A, building power-efficient AI servers for the largest and most advanced AI workloads.

  • Parallel Web Systems closed a $100M Series A, creating the interfaces, infrastructure, and economic models for AI agents to thrive on the open web.

  • WisdomAI closed a $50M Series A, building the first AI Data Analyst that delivers trusted, accurate, and proactive insights for the enterprise.

  • Quino Energy closed a $10M Series A, developing water-based flow batteries that store electrical energy in organic molecules called quinones, for commercial and grid applications.

  • Terranova closed a $7M Seed, rewriting the adaptation playbook by reshaping terrain itself with robots the size of cars that inject a wood slurry deep underground.

Growth Stage:

  • Anysphere closed a $2.3B Series D, building the engineer of the future: a human-AI programmer that's an order of magnitude more effective than any one programmer.

  • d-Matrix closed a $275M Series D, pioneering accelerated computing, breaking through the limits of latency, cost and energy to deliver fast, sustainable AI inference at data center scale.

  • Alembic Technologies closed a $145M Series B, providing AI-powered marketing analytics for C-suite executives.

  • ClickHouse closed a $94M Series C, building a fast, open-source columnar database management system for real-time data processing and analytics at scale.

  • Gamma closed a $68M Series B, building a new medium for presenting ideas, powered by AI: more visual than a doc, more collaborative than a slide deck, and more interactive than a video.

Mako

Mako is reshaping how AI teams unlock performance from their GPU infrastructure. Their platform generates and tunes high-performance GPU code automatically, helping companies boost inference speed and cut compute costs without rewriting workloads.

What Mako Delivers
• Automated GPU kernel generation
• Higher throughput on NVIDIA and AMD hardware
• Faster inference without changing your model
• Performance engineering as a service

Why It Matters
Most teams leave performance on the table. As models grow and GPU availability tightens, efficiency gains create real competitive advantage. Mako helps teams get more from the hardware they already run.

Who Uses It
Fast-scaling AI startups, teams building agentic workloads, and any company looking to raise throughput and lower spend across cloud or on-prem GPUs.

Learn more at mako.dev.

Your Feedback Matters!

Your feedback is crucial in helping us refine our content and maintain the newsletter's value for you and your fellow readers. We welcome your suggestions on how we can improve our offering. [email protected] 

Logan Lemery
Head of Content // Team Ignite

WhatsApp Business Calls, Now in Synthflow

Billions of customers already use WhatsApp to reach businesses they trust. But here’s the gap: 65% still prefer voice for urgent issues, while 40% of calls go unanswered — costing $100–$200 in lost revenue each time. That’s trust and revenue walking out the door.

With Synthflow, Voice AI Agents can now answer WhatsApp calls directly, combining support, booking, routing, and follow-ups in one conversation.

It’s not just answering calls — it’s protecting revenue and trust where your customers already are.

One channel, zero missed calls.