Skip to main content

Current State of AI in Programming: Supervised Assistance

My experience with AI in programming, particularly with tools like AI copilots, leads me to conclude that we're currently in a supervised programming phase, not yet at "vibe programming." This assumes that programmers can provide clear and comprehensive descriptions of their requirements.

Where AI Excels: Auto-Completion

The most helpful and efficient mode I've found for AI copilots is auto-completion. In this scenario, I estimate around 70-80% of generated code is successfully used. This indicates AI's strong capability in assisting with direct, context-aware code suggestions.

Challenges with Autonomous ("Vibe") Programming

Attempting to use an AI copilot for fully autonomous "vibe programming" is generally inefficient for several reasons:

  • Iteration Overhead: Achieving an acceptable solution often requires numerous iterations to fix syntax errors and correctly implement logic.

  • Increased Processing Time: The processing time for these more complex generation tasks is significantly longer.

  • Rapid Quota Consumption: Premium quotas are used up very quickly. For example, in a recent project where I used a copilot to generate unit tests, the usage limit was reached within two days.

  • Context Window Limitations: "Vibe programming" demands full context and examples. Due to the limited context window, AI often struggles to grasp external dependencies like the "gofr" framework, including:

    • Solution architecture/project structure/layers

    • Initialization procedures

    • Database connection creation

    • HTTP client initialization

    • Function signatures

    To overcome this, a good implementation or example must be provided for the AI to complete the rest of the code effectively.

Potential Pitfalls of AI-Generated Code

  • Over-fitting to Test Cases: Similar to the concept of over-fitting in machine learning, AI sometimes generates code that specifically matches test cases rather than providing a more generic or robust solution.

  • Codebase Degradation: If not closely monitored, AI can introduce "garbage" code, leading to an unmanageable and deteriorating codebase over time.

Comments

Popular posts from this blog

Long Term Memory Technology Comparison

Let’s compare traditional databases , graph databases , and LLM network memory in terms of accuracy , structured data , and retrieval . 1. Accuracy Aspect Traditional Database Storage Graph Database (e.g., Neo4j) LLM Network Memory Definition Data is stored explicitly in tables, rows, and columns. Data is stored as nodes, edges, and properties, representing relationships. Data is encoded in the weights of a neural network as patterns and relationships. Accuracy High : Data is stored exactly as input, so retrieval is precise and deterministic. High : Relationships and connections are explicitly stored, enabling precise queries. Variable : LLMs generate responses based on learned patterns, which can lead to errors or approximations. Example If you store "2 + 2 = 4" in a database, it will always return "4" when queried. If you store "Alice is friends with Bob," the relationship is explicitly stored and retrievable. An LLM might c...

Economic Impact of New Tariffs on Canada, Mexico, China, and Europe

Tariffs as Federal Income 1. Tariff Revenue from Canada, Mexico, and China Using 2024 U.S. import projections (based on 2023 data from the U.S. Census Bureau and Trading Economics): Country 2024 Est. Imports (USD) Tariff Rate Revenue Generated Canada $420 billion 25% $105 billion Mexico $400 billion 25% $100 billion China $500 billion 10% + 10%* $100 billion Total $305 billion *China’s tariff is assumed to be a phased 10% + 10% (total 20%). 2. Tariff Revenue if Applied to All European Countries (25%) The U.S. imported $620 billion from the EU in 2023. Assuming 3% growth in 2024: 2024 EU Imports : $638 billion Revenue at 25% Tariff : $638B × 0.25 = $159.5 billion Combined Total Revenue (Canada, Mexico, China, EU) : $305B + $159.5B = $464.5 billion Spending the Extra Tariff Income 1. Trump’s Promised Tax Reductions Corporate Tax Cuts (21% → 15%) Current Federal Corporate Tax Revenue (2023) : $425 billion Projected Taxable I...

The Evolving Nexus: Risks of Stablecoin Integration with U.S. Sovereign Debt

I. Executive Summary This report examines the multifaceted risks arising from the increasing integration of stablecoins with U.S. sovereign debt. While direct "conversion" of outstanding debt into stablecoins is not the primary mechanism, the significant and growing role of stablecoins as purchasers of U.S. Treasuries introduces novel systemic vulnerabilities. The analysis highlights critical financial stability concerns, including de-pegging events and liquidity risks, alongside profound challenges to monetary policy and sovereign control. Furthermore, the fragmented regulatory landscape and persistent operational vulnerabilities amplify these risks. A robust, harmonized regulatory framework and proactive policy adaptation are imperative to mitigate these emerging challenges while strategically leveraging stablecoins' potential benefits for financial efficiency and the global standing of the U.S. dollar. Podcast II. Introduction: Stablecoins and the Landscape of Sovereig...