• HappyFrog@lemmy.blahaj.zone · 2 months ago

    Why do AI folks have to make up these strange names for what is essentially just giving more text to an LLM? MCP is just searching an online database for more text. RAG is just searching a local database for more text, but fancier. There’s functionally no difference between an “AI agent” and the AI you talk to.

    • Venator@lemmy.nz · 2 months ago (edited)

      Almost everyone comes up with shorthand names or acronyms for things they type or say frequently.

      But yeah, MCP is just an API where the docs are tailored to help an LLM supply useful inputs, and it seems like they’re making up new terms for things that already exist under other names 😅

      Not sure about RAG, but it sounds like it’s just an API that accesses help docs or something similar… 😅

      • 8uurg@lemmy.world · 2 months ago

        RAG is Retrieval Augmented Generation. It is a fancy way of saying “we’ve tacked a search engine onto the LLM so that it can query for and use the text of actual documents when generating text, so that the output is more likely to be correct and grounded in reality.”
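        To make that concrete, here’s a minimal RAG sketch. Everything in it is illustrative: the toy keyword-overlap `retrieve` function stands in for a real search engine or vector store, and the prompt layout is just one common pattern, not any particular library’s API.

```python
# Minimal RAG sketch: retrieve relevant documents, then stuff them into
# the prompt so the LLM can ground its answer in actual text.

def retrieve(query, documents, top_k=2):
    """Toy keyword-overlap retriever standing in for a real search engine."""
    query_words = set(query.lower().split())

    def score(doc):
        return len(query_words & set(doc.lower().split()))

    return sorted(documents, key=score, reverse=True)[:top_k]

def build_prompt(query, documents):
    """Augment the user's question with retrieved context before generation."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

docs = [
    "The office is closed on public holidays.",
    "Parking passes are issued by the front desk.",
    "The cafeteria serves lunch from 11:30 to 14:00.",
]
print(build_prompt("When does the cafeteria serve lunch?", docs))
```

        The resulting prompt is what actually gets sent to the LLM, which is why the output ends up grounded in the retrieved documents rather than in whatever the model half-remembers from training.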

        And yeah, MCP stands for Model Context Protocol, and is essentially an API format optimized for LLMs, as you’ve said, to defer to something else to do the work. This can be a (RAG like) search engine lookup, using a calculator, or something else entirely.
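        A rough sketch of the loop MCP standardizes, assuming the model emits a structured tool call as JSON; the names and message shape here are hypothetical, not the actual protocol schema.

```python
# Hypothetical tool-use loop: the model emits a structured tool call,
# the host runs the real tool, and the result goes back into the context.
import json

TOOLS = {
    # eval with no builtins, for demo purposes only -- a real host would
    # expose a proper calculator tool, not eval.
    "calculator": lambda args: eval(args["expression"], {"__builtins__": {}}),
}

def handle_model_output(model_output):
    """If the model asked for a tool, run it; otherwise return the text as-is."""
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        return model_output  # plain answer, no tool call
    result = TOOLS[call["tool"]](call["arguments"])
    return f"tool result: {result}"

# The LLM can't reliably multiply big numbers, so it defers to the tool:
print(handle_model_output('{"tool": "calculator", "arguments": {"expression": "123456 * 789"}}'))
```

        The point is exactly the deferral: the model never computes the answer itself, it just decides which tool to ask and how to phrase the request.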

        LLMs suck at doing a lot of stuff reliably (like calculations, making statements relating to recent events, …), but they turn out to be quite a useful tool for translating between human and machine, and reasonably capable of stringing things together to get an answer.

    • Wirlocke@lemmy.blahaj.zone · 2 months ago

      I don’t know if VS Code Copilot defines MCP differently, but it’s more about giving the LLM API access to do things, like letting the LLM make GitHub commits, for example.

    • Fiery@lemmy.dbzer0.com · 2 months ago (edited)

      Why does Nvidia even boast about their new GPUs? They’re doing the same calculations as the old generation, I fail to see the difference between Blackwell and the new gen they just announced. /s

      There very much is a difference between a generic chatbot and one that can use the tools you listed. And with how LLMs work, it’s not just “faster” like my sarcastic example above, but actually qualitatively better results.

    • lIlIlIlIlIlIl@lemmy.world · 2 months ago

      MCP servers add tooling and abilities that wouldn’t otherwise exist. That’s not the same as just a larger context window.

    • scytale@piefed.zip · 2 months ago

      no difference between an “ai agent” and the ai you talk to

      Isn’t an agent locally installed on your system, though? There’s some functional difference there, I think. But on the other hand, I guess you could also say browsing Lemmy is the same thing whether you’re on a mobile browser or a native mobile app.

  • Scrubbles@poptalk.scrubbles.tech · 2 months ago

    As an engineer who spent most of my life coding fintech services… you do NOT want to meet FINRA, FinCEN, or any of those agencies. Best case, you’re sued for millions. The step after that is immediate arrest.