
  • Within those 1 billion years of evolutionary experience came the experience of interacting with objects in the most primitive way, which is also part of human intelligence.

    Human intelligence goes far beyond this. Almost all animals are capable of this behavior to some degree, with some going as far as using tools.

    There are also a significant number of animal species that have an even stronger intuitive knowledge of this than humans. For example, a monkey knows how to swing through trees at a rapid pace. And yet, we know that humans are still more intelligent. This is an example of alignment: these animals are missing a broad range of human knowledge, but have a large amount of knowledge within certain areas (intuitive physics, body coordination).

    A monkey has great intuitive physics and body coordination knowledge, but it cannot speak English, closely follow instructions, or write code. Does this mean that AI is smarter than a monkey? No. AIs are able to do this because they are more closely aligned with the skills we need and the skills we trained them for. This alignment is what makes them usable at all when they contain so little intelligence.

    I agree with this:

    we cannot really turn all of those skills into a single metric

    I’m not certain philosophically, but I think the existence of a single intelligence metric would inherently mean “know-all”, at least in the way it’s typically thought of. The “single metric” doesn’t exist in reality. Our estimates of this “single metric” are not actually a single metric, but a combination of multiple specific skill metrics: skills that we consider to be valuable in our society (e.g. social skills), and skills that we consider to be “innate to” or “expected of” the human brain (e.g. working memory). Regardless, they are not “general”, and that is why I prefer to use “human” or “super-human” instead of “general” (as do a lot of people discussing AI). This is more accurate to our true intended meaning: an intelligence that contains all human intelligence, and can therefore do anything a human can do (labor).

    This is true; I just wanted to highlight it because it reminds me of how (neo)rationalists think. They happen to be the biggest supporters of AI.

    Yup, unfortunately for them, they do not have Marxism 😁. Marxism’s understanding that obtaining knowledge requires practice makes it uniquely suited to understanding AI.


  • I did not insult blue collar workers. I said that human intelligence is overkill for manual labor, and is only used due to a lack of an alternative.

    You claim that the position you oppose is “rambling nonsense”, and yet you are unwilling or unable to make a legitimate critique. Instead of investigating or staying silent, you and others knowingly straw-manned my position and distorted it, intending to portray me as a conservative who thinks that workers who do manual labor are unintelligent. Why? Because you have a tribalist position on AI and felt the need to attack an opponent who was unaligned with you, and you prioritized this urge over correctness: your own, mine, and that of the other participants. This is not “pretty nice” behavior, this is liberal behavior, and as a communist, I am expected to call it out and attack it.

    To indulge in personal attacks, pick quarrels, vent personal spite or seek revenge instead of entering into an argument and struggling against incorrect views for the sake of unity or progress or getting the work done properly. This is a fifth type.

    To be aware of one’s own mistakes and yet make no attempt to correct them, taking a liberal attitude towards oneself. This is an eleventh type.



  • With regard to the harm that may come from automation, there are two positions (for and against) you could take on real automation:

    1. Automation is useful, and good, but we should also prepare for the negative consequences, and try to mitigate them.
    2. Automation has significant negative consequences that exceed the benefits, and is therefore harmful.

    If the position is “While AI is definitely useful, we should be wary of slop effects, such as harm to the education system, and the harm that AI spam could cause to the internet”, then that is acceptable.

    If the position is “AI needs to be stopped, is bad because it stops struggle (???), and does more harm than good”, then that is a toxic and luddite position. How can it be fascist? By appealing to tradition, by appealing to nature, and by appealing to capitalist, puritanical and protestant work ethics.

    The overarching goal of my text was mentioned at the end: to bring materialism, realism, and logic back into the left-wing discussion of AI. Could I mention the negative effects of AI such as slop? Yes. But the left is quite aware of those positions already. It is NOT aware, even slightly, of how to view AI as automation; this comment section is proof. There were some discussions in my post about right-wing and capitalist delusions, such as the singularity and near-future human-level intelligence, but this was to bound the acceptable range of positions: people should not believe in some magical singularity, nor should they think that AI is completely and utterly useless, as many people on the left genuinely, truly think. I also discussed some aspects of the capitalist and right-wing influences on the direction of AI: how hatred of artists and workers causes capitalists to try to “skip ahead” beyond what is possible with current technology, such as by trying to create entire pictures and movies from scratch (natural language instructions), with no artist involved.

    So obviously, of the two positions that I mentioned, I think you are taking the second one. Why? Let’s look at some of your statements.

    AI will destroy critical thought and interpersonal relationships, like the internet destroyed the community and the aircraft destroyed the family.

    Is that the sum consequence of the internet? To “destroy the community”? Not instant communication all around the world, the ability to transfer large amounts of data such as images and video, the ability to create software that automates manual processes? Should we retvrn to the telephone?

    The aircraft destroyed the family? Is that more important than the ability to travel across the world, across oceans, at hundreds of miles per hour? The ability to quickly transport products and people across the world in a matter of days or hours instead of weeks or months by ship?

    You may say that you know that it was positive overall, and are just highlighting some negative consequences, but you do not talk like you know this. If someone who is anti-AI says that AI is bad because it is similar to the internet and airplanes, that is not someone who is serious. Maybe it is bad wording, or it could be a non-dialectical-materialist worldview and approach.

    You narrowed down the criticism of aircraft in general to criticism of autopilot in particular. Then you used an example of capitalist greed and short-termism, where airlines and others do not give pilots sufficient chances to practice their skills, to criticize autopilot itself. That is not the fault of autopilot, that is the fault of capitalism. It is not the technology, it is its misuse and application outside of its viable range.

    Wanting to restrict AI given it’s purported trajectory in capability is more akin to wanting denuclearization than advocating for the cessation of all things post industrial.

    Should we also abandon nuclear power, the one truly viable source of green energy in the world? They are the same technology. The ability to make nuclear weapons inherently becomes much stronger when building the ability to make nuclear power plants. They cannot be separated. And again, nuclear weapons are a misuse of nuclear technology, which is the fault of capitalists, not the fault of the technology itself. It can be used to generate an absurd amount of power for production, or it can be used to generate an absurd amount of power for destruction.

    Struggle is in all facets of life, not just what’s physical and aesthetic (which I imagine is what fascists are most concerned with).

    Did we not evolve in the face of struggle? What of our species if we take struggle away? Are we to become the people depicted in WALL-E?

    The struggle and achievement involved in platonic and romantic relationships, in personal fulfillment: that may be human. Struggle in a factory, doing the same thing all day just to make ends meet? No.

    I have saved sentiments that I feel are the most absurd, deranged, fascist, and luddite-related for last:

    Let’s say the world’s AI is ultra intelligent, costs 1 watt an hour to run, and is subserviently friendly- and while we’re at it, why not say that the world has passed universal basic income and the lack of jobs are no longer a concern. What’s the point?

    the world has passed universal basic income and the lack of jobs are no longer a concern. What’s the point?

    THE WORLD HAS PASSED UNIVERSAL BASIC INCOME AND THE LACK OF JOBS ARE NO LONGER A CONCERN. WHAT’S THE POINT?

    What is the point?

    What is the point???

    Huh?

    “We have achieved luxury communism, where everyone’s needs are met, but at what cost???”

    What you are describing IS the point.

    Pointing out what has been studied and well documented (the negative affects of the internet / similar technologies) is not primitivism.

    Was that the thing that you were “pointing out”? That not needing to work is bad? What is the thing that “has been studied”? Appeal to nature? Traditionalism? Idealism? American Protestant work ethic? Is that the thing we should study?

    Not a fascist, not an ancom?

    Let’s repeat a few times.

    THE WORLD HAS PASSED UNIVERSAL BASIC INCOME AND THE LACK OF JOBS ARE NO LONGER A CONCERN. WHAT’S THE POINT?

    No fascism here?

    THE WORLD HAS PASSED UNIVERSAL BASIC INCOME AND THE LACK OF JOBS ARE NO LONGER A CONCERN. WHAT’S THE POINT?

    No primitivism here?

    We already have an overabundance of production in the world, those bereft of the things they need are in that position because in our overindulgence we have decided to take from them as well.

    My bad, I didn’t realize.

    We already have an overabundance of production in the world

    Considering that this is partially a logical flaw, not just simplistic idealism, I will say this: proving to you that humanity is not finished building productive forces is out of scope. Honestly, it would be a post 10x longer than the one I made here, and I’m not even sure I have the skill to make something of that quality. But we are not finished building productive forces. We may have improved productivity in agriculture, which meets our primary need to eat and drink, but we have also reduced infant mortality, and nearly tripled life expectancy from the natural length of below 30 years to the current point of 70 years. We have improved quality of life in all areas, from medicine, to comfort and shelter, to communication, etc. This takes far more work than just agriculture! And we are still not finished. Even if every country became socialist tomorrow, we would not be able to create a 5-hour work week. Unless we completely abandon all progress, let society decay, abandon the geoengineering needed to correct climate change, etc., we will probably not be able to reduce the worldwide work week below 20-30 hours. I do not know your location, but the imperial core extracts a large amount of wealth. It may seem like work in the imperial core is unnecessary, but this is due to excess work in the global south and exploited countries.

    We already have an overabundance of production in the world

    …But again, this is simplistic idealism as well. Production is good, and we do not have an “overabundance” of it.

    Switch to position 1, abandon position 2. EMBRACE MATERIALISM. EMBRACE TRUE REALISM.

    Let me ask anyone reading this one question; Is the theory you consume the backbone of actual substantive practice from you as the authors intended, or is the consumption of theory just a hobby and it’s discussion a way for you to feel a part of a community?

    Lol.

    To be fair, it is more the other people in the comment section than it is you. Some are refusing to justify their criticism because it concerns AI, some are practicing tribalism by taking false offense and straw-manning my arguments as “manual laborers are not intelligent” because they don’t like my side, and some are taking nonchalant, “above-it” attitudes and telling me to read very verbose texts from Marx with no explanation (luckily I had already read the Marx text haha). You aren’t really doing as much of that type of stuff, and we mostly just disagree. But my text is directly intended to fight this behavior, so I strongly disagree with the implication here that I am not the one following this. We should not blindly take a side on AI based on popularity or gut feeling; we should base our positions on materialism and logic.


  • This community is low quality to say the least, so I am not really interested in continuing to discuss this topic. I will try to make this short.

    What’s the point?

    Automation and the increase of productivity. That is the point. That is the point of the industrial revolution, that is the point of the steam engine, that is the point of electricity, of motors and robotics, of semiconductors and computers, of the internet. Primitivism and idealism are reactionary. If AI is used in a harmful, non-productive way, it is an incorrect usage. If AI is used in a helpful way, it is a correct usage. If AI is used in a productive but harmful way, such as making workers unemployed, the problem is the capitalist economic system, not AI, and socialism is the solution.

    What of our species if we take struggle away?

    To put it bluntly, this is fascist sentiment. The goal of production and progress is to meet the needs of people, and make their lives better. We do not exist to struggle, we struggle to build a world where this will not be necessary.

    AI will destroy critical thought and interpersonal relationships, like the internet destroyed the community and the aircraft destroyed the family.

    Is this not GenZedong? Am I in some ancom debate subreddit? I genuinely think I might be losing my grip on reality LOOOL

    Do not worry about reading this text, read Lenin and Stalin instead.


  • So first of all, that source and my own text are talking about two different things. That source is discussing the economic significance of human-level AI (a “machine which possesses skill and strength”), whereas I am disputing that this will exist in the near future. Marx is discussing the outcome, and not the timing, whereas I am discussing the timing, and not the outcome.

    Second, that source directly contradicts your claim that intelligence is irrelevant. In fact, it is almost entirely about the significance of intelligence, and makes nearly the exact same argument as my text briefly made, except without the discussion of imperialism and timing:

    The appropriation of living labour by objectified labour… is posited… as the character of the production process itself. The production process has ceased to be a labour process in the sense of a process dominated by labour as its governing unity.

    Compare with my own text:

    If AI replaces human intelligence, labor will be automated, it will automatically scale with natural resource inputs, of which many such as electricity are effectively infinite in the short-term, and the concept of “value” will break down

    How about below-human AI? Your source:

    As long as the means of labour remains a means of labour in the proper sense of the term, such as it is directly, historically, adopted by capital and included in its realization process, it undergoes a merely formal modification, by appearing now as a means of labour not only in regard to its material side, but also at the same time as a particular mode of the presence of capital, determined by its total process – as fixed capital.

    My own text:

    Human level intelligence is unlikely. What is more likely in the near future is the elimination of a significant portion of white collar and manual labor, meaning >40% of jobs, but not 100% of jobs. In other words, it is effectively automation, and traditional Marxist economics already mostly explain the economic effects.

    We are not in a propaganda environment; this is a discussion among Marxists. If you are unable to articulate an argument, you should force the articulation until it is coherent. If you are not confident in your positions, you should openly state them and seek out the strongest criticisms until your confidence is strong. You should not vaguely appeal to authority instead of articulating, nor should you hide your positions.


  • They don’t really create a 3D spatial model of the environment. Then: From a 2D input they are just adding higher contexts to that 2D model. E.g. “the cat is in front of the TV” is a relationship mapping between 2 recognised objects. Or the cat is tiny in the image hence probably further from the camera.

    What you are describing is a model. It is much worse than a human’s model, but it still exists. As you said, “There are limits to what AI is good for”, and if used improperly, as many AI companies do, it will not work out. But if the crappy model is sufficient for the task, it is acceptable. I do not think Gemini 2.5 Pro or GPT5 are sufficient, but they are not far off in terms of intelligence (VERY far off in terms of context length though).

    Without this human feedback, it isn’t “learning” anything.

    This is ideal, because humans are a much better source of intelligence. Without this, learning would be much more difficult, and more comparable to evolution.

    From there the AI is trained based on giving shitty summaries to a tonne of 3rd world outsourcing data entry employees

    This may not be important to your position, but it is mostly incorrect. Post-training work like this is focused on alignment, and does not really add intelligence on its own; outside of things like GRPO, that is true of post-training in general. The majority of the intelligence comes from self-supervised pretraining by predicting the next word in a text. I have done that work myself, mainly for software engineering and tool-calling. Most of it is actually going to make the model stupider rather than smarter, due to the over-constraining nature of post-training. I think it doesn’t change your position though.
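
    For context, here is a minimal sketch of what that self-supervised objective looks like (a toy PyTorch model on random token IDs, not any lab’s actual training code): the “label” for each position is simply the next token of the text, so no human annotation is involved.

    ```python
    import torch
    import torch.nn.functional as F

    vocab_size, seq_len, d_model = 1000, 16, 64

    embed = torch.nn.Embedding(vocab_size, d_model)
    lm_head = torch.nn.Linear(d_model, vocab_size)

    tokens = torch.randint(0, vocab_size, (1, seq_len))  # stand-in for real text
    hidden = embed(tokens)   # a real model would run transformer blocks here
    logits = lm_head(hidden)

    # Shift by one: predict token t+1 from everything up to token t.
    loss = F.cross_entropy(
        logits[:, :-1].reshape(-1, vocab_size),
        tokens[:, 1:].reshape(-1),
    )
    loss.backward()  # gradients come from the text itself, no labels needed
    ```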


  • It’s your life in the end, but this is just a fundamentally non-Marxist position. Our economic theory is literally called LABOR theory of value. How can you not care about what human “labor” actually is, what makes it different from animals, what makes it different from robots? Intelligence is the difference between them, and the concept of labor cannot be understood without this.

    You may not care about what intelligence is and how it is obtained, but Marxists care a lot. Our fundamental philosophical foundations in dialectical materialism and the theory-practice cycle are all about the source of knowledge and how to obtain and improve it. There is an extremely large amount of work from Marxist leadership, such as Marx, Lenin, Stalin, and Mao focusing on this exact topic.


  • I will admit that my 40% claims are probably overblown. “Manual labor” categorizations probably include more than the main path in well-defined roles: they likely also cover specialized roles, less-defined roles, and even some trades, accidentally or not.

    But there are a large number of jobs in logistics that are quite literally moving items from one location to another. For sortation, this is literal boxes (packages). For picking in a warehouse, it is individual items with varying shapes, sizes, strengths, etc., so there is more intelligence involved, and without a generalized AI solution, a more complex grasping algorithm is required. Are there some specialized roles, such as problem-solving, cleaning, etc.? Yes, but the vast majority of workers are not doing those roles, and are expected to hand off those tasks to others to avoid distractions. For example, informing managers of a spill, informing problem-solvers of missing labels or items, etc.

    Instead of having an AI solve all of these problems, it can delegate the same way, meaning that it only needs to identify a spill, missing label, etc., rather than fixing it. When the AI fails to delegate, a worker monitoring the AI can eventually correct the situation.
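
    As a toy illustration of that delegate-instead-of-fix pattern (the helpers and categories here are hypothetical, not a real warehouse system):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Observation:
        kind: str       # "spill", "missing_label", or "ok"
        location: str

    def handle(obs: Observation, notify) -> None:
        """Recognize the problem and hand it off, the way a human associate
        flags a spill to a manager rather than cleaning it up themselves."""
        if obs.kind == "spill":
            notify("manager", f"spill at {obs.location}")
        elif obs.kind == "missing_label":
            notify("problem_solver", f"unlabeled item at {obs.location}")
        # "ok": keep moving items; a monitoring worker catches anything missed.

    # Toy usage with a print-based notifier standing in for a real escalation path.
    handle(Observation("spill", "aisle 14"), lambda who, msg: print(who, "<-", msg))
    ```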

    We are both using anecdotal evidence: mine from logistics work at places similar to Amazon, UPS, etc., and yours from what could potentially be a much more skilled role. I will give more information about my own anecdotal evidence.

    Here is an example of packages being moved between pallets and a conveyor belt. I never worked at a facility big enough to use these, but they are absolutely used in real life.

    Here is a pretty well-known example of robots at Amazon handling the “movement” that I was talking about: the associate no longer needs to walk around the warehouse, and can simply focus on the picking (which is the more difficult “grasping” problem that the robots are not good at).

    Notice that in both of those examples, the work is being done completely without large language models. This does not require high intelligence. And think about it: if we can already do this without the AI that is being discussed, is that a positive view of this AI? No. Really, my post is extremely pessimistic. I pointed out that the true limiter is the robots themselves, which is visible from the fact that we already have ways of doing this job without AI, but are still not doing it. It does not matter if we have AI when the robot itself costs $200k.

    Especially when combined with software engineering and other white collar work, this means that AI could be very valuable, but will not turn the world on its head. It is simply another form of automation, and while it is a big jump, it is not too different from previous jumps. Aside from bad timing and misallocation, that is a big reason why AI investment in America is a bubble.

    I also want to point out this:

    manual labor does in fact require thinking

    It requires far more thinking than any of our artificial intelligence can manage, and requires far more alignment than animals have in almost all cases (except for some cases such as farm animals, and these monkeys lol). But does it truly require human-level intelligence? The same human intelligence that discovers quantum physics, goes to outer space, builds the internet and computers? No. The reason we consider it to be a human task is that there is nothing beneath humans that is suitable. Human intelligence, while it has some benefits, is completely overkill for a lot of these roles. At the same time, AI and algorithmic intelligence is way too low for the role, so we are forced to use humans anyway.

    Even if you or others personally prefer this work, and it brings challenge and requires learning new things, it is still overkill for the role. And if you work at a more skilled role, then maybe AI would not be suitable, but it could be suitable in many other areas.


  • It seemed more like a rhetorical question tbh. I thought you were either joking or insulting me, so I didn’t answer. You probably are, but I’ll give you an answer anyway.

    AI is nearly useless with Marxism. It is only able to provide basic definitions and sometimes very narrow applications in well-understood contexts, usually contexts that were discussed by famous Marxists in the training data. If it wasn’t useless, I’d use it to filter recent Marxist texts to find high quality ones, I’d translate Chinese texts to English so that I could read them, etc., but I do not trust it to reliably do this.

    Anyway, because of that, for writing I never really use it beyond proofreading/verification; I’ve probably taken text from it (individual sentences and rewordings) fewer than 5 times ever for Marxism. Any Marxism it outputs is just a bunch of cargo-cult dramatic prose that doesn’t really say anything in the end. In the case of this text, it repeatedly says that a weakness of the text is the overdependence on “practice”, meaning that it doesn’t truly understand Mao’s On Practice, the theory-practice cycle, or dialectical materialism in general. And it won’t dare say anything positive about China in most cases lol. That tiny comparison of America and China makes it say that the text is idolizing China.

    The wilder people in the replies are complaining about the fact that there are multiple conclusions and sections. I’d call them delusional (and honestly, I think it flows clearly if you actually read it), but there are actual reasons for this besides neurodivergence. It was originally in different formats:

    • It started off as a Q&A-style format where I corrected common leftist mistakes relating to AI, similar to the 3rd miscellaneous section. It was supposed to be a short post, but got very long, so I ended up discussing the majority of it in an intelligence section and the economics section, basically making it a real text. The remaining stuff was either unrelated, or would have diluted those sections, so I kept the remaining stuff in a 3rd miscellaneous section, and summarized the whole text with the same original goal: bring materialism into discussions of AI.
    • The intelligence section was originally two parts, with the second part being about AI technical capabilities. This section was way too detailed and technical, explaining each capability: vision, speed, price, short-term memory vs long-term memory vs working memory and how they compare to humans for each one, etc.
    • The American bubble part was originally meant to be combined with parts about venture capitalism, monopoly, and reserve currency, but those were excluded due to being too detailed and too irrelevant. I also meant to discuss it in another propaganda text about the international reserve currency and US collapse, but I abandoned that text entirely.

    Basically it was massively shortened, and was not a single post originally.

    Since it was a combination of a bunch of different stuff that was reorganized multiple times, I did use AI to find where the flow “breaks” (sudden jumps, cut-off sentences from copy-pasting, etc.). But as I said, AI is awful at actually writing about Marxism. Although I tried to get assistance from it, it did not suggest any of the rewrite strategies I mentioned, because it doesn’t comprehend how the topics are related.

    But afterward, I asked for help with the thesis. It was successful there, so there is one sentence in the text that actually was adapted from AI output. I will show you the process for transparency.

    Here is the original:

    So far, this text has been very optimistic about the value of AI compared to typical left-wing views. AI does have some intelligence and will soon have a large amount of real-world automation value. This does not mean that AI is not a bubble in America. Even if AI will be worth hundreds of billions of dollars to the economy (which I think is almost guaranteed), American investors have invested more than this, and have done so in a very wasteful way. There is a major limitation to the value of AI: real-life physical input costs. American AI investment is also extremely wasteful because it invests too early and because it aggressively redirects resources from other industries.

    I tried to get AI to improve on this, and it gave me this:

    While the previous sections established AI’s significant practical potential, this value must not be confused with its current financial valuation in the United States. The American approach to AI is a classic speculative bubble, characterized by investment that far exceeds realistic economic returns. This bubble is driven by two core fallacies: a disregard for the hard physical limitations of automation—such as input costs and resource availability—and a profoundly wasteful strategy of premature investment that aggressively redirects capital and energy from other vital sectors of the economy.

    THIS is how AI writes. It sees “Marxism” and starts writing super dramatically. Was it mostly logically correct? Yes, but that is because I provided it an almost-finished version of the text from the post, and it didn’t need to add anything (I wanted it to write thesis-like statements for the transition to the next chapter). If I hadn’t provided it the whole text, it would have obviously made something much worse, because its comprehension is worse than its writing ability. This is one of the best outputs I’ve gotten from AI for Marxism, and it’s still mediocre, even if it managed to not say anything incorrect.

    But the last sentence is salvageable there. So I took it and made this:

    This waste manifests as both a premature investment in rapidly depreciating technology and a damaging diversion of essential resources, like energy and skilled labor, from other economic sectors

    As you can see, I shortened the sentence a bit, and removed the surrounding/previous parts, but kept a lot of the wording from that sentence. So to answer your question, that sentence’s wording was significantly assisted by AI. The rest was not.

    As for your concerns:

    I think it’s fair to expect an author to have spent at least as much time writing as they expect a reader to spend reading.

    I started recognizing that AI was able to generalize faster (from fewer examples) before GPT3 came out, probably 7 years ago or so. When combined with strong synthetic data strategies, large pretrained vision networks were capable of learning to classify an image from only a few examples (it used to take hundreds or thousands). Later on, similarity learning methods (for example, triplet loss) were capable of identifying objects from a single example without additional training, although with mediocre accuracy. This shows that when pretrained on a large amount of general data, AI models are able to generalize more quickly to narrower datasets, and at much higher quality than if they trained on those datasets alone.
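
    To illustrate the triplet-loss idea, here is a minimal sketch (toy random embeddings standing in for the output of a pretrained vision network, not the actual systems I worked on): the network is pushed to place an anchor image closer to a matching view than to a different object, so a single reference image can later identify new objects by embedding distance.

    ```python
    import torch
    import torch.nn.functional as F

    def triplet_loss(anchor, positive, negative, margin=0.2):
        """Pull the anchor toward the positive, push it away from the negative."""
        d_pos = F.pairwise_distance(anchor, positive)
        d_neg = F.pairwise_distance(anchor, negative)
        return F.relu(d_pos - d_neg + margin).mean()

    # Toy embeddings; in practice these come from a pretrained vision network.
    anchor   = torch.randn(8, 128, requires_grad=True)
    positive = anchor.detach() + 0.1 * torch.randn(8, 128)  # same object, new view
    negative = torch.randn(8, 128)                          # different object

    triplet_loss(anchor, positive, negative).backward()
    ```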

    I’ll spare you the details, but I started comparing evolution to machine learning once I saw GPT3. And I was a lot more extreme about it, thinking that it would take hundreds of years to catch up at minimum. I thought that the only way to “bypass” evolution would be to train on brain scan/MRI-like data, or similar (basically cloning the brain or something close). In reality, human language is effectively distilled data/embeddings of human knowledge, so it is very effective for bypassing evolution, at least partially.

    Last year, I was looking at AI coding approaches, and realized that the traditional conversation format is not ideal. It is effectively a whiteboard interview: the agent cannot use IDE features such as syntax checking, it cannot run tests, etc. I started making an AI agent, thinking that if I gave the AI the ability to run tests and experiment the way a human would, it would perform much better. Instead of a human pointing out the mistakes, the AI could find the mistakes itself, and that would dramatically improve the final result. But performance was awful. I realized that AI quality drops dramatically once the context length exceeds ~10k tokens. This matches other people’s experience online.
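
    For what it’s worth, a minimal sketch of the loop I was attempting (the `llm` callable and `apply_patch` helper are hypothetical stand-ins; the real agent was more involved):

    ```python
    import subprocess

    def run_tests() -> tuple[bool, str]:
        """Run the project's test suite and capture its output."""
        result = subprocess.run(["pytest", "-x", "-q"],
                                capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    def agent_loop(llm, apply_patch, task: str, max_iters: int = 5) -> str:
        """Let the model see test failures and retry, instead of a human
        pointing out each mistake (the 'whiteboard interview' problem)."""
        context = f"Task: {task}\n"
        patch = ""
        for _ in range(max_iters):
            patch = llm(context)   # model proposes code or a diff
            apply_patch(patch)     # hypothetical: writes the change to disk
            ok, output = run_tests()
            if ok:
                break
            # Feeding failures back is where the context blows up: past ~10k
            # tokens, output quality degraded sharply in my experience.
            context += f"\nYour patch failed:\n{output}\n"
        return patch
    ```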

    I started portions of this text late last year (for the other texts I mentioned), and actually started this specific text a few months ago. I abandoned and came back to it repeatedly, and then finally finished it when Karpathy started talking about the AI bubble a few weeks ago, as his arguments matched mine (evolution).

    So overall this post was from 7 years or so of observing and working with AI, took a few months to make, and probably took ~100 hours or so of actual direct writing.

    To be honest, the complete unwillingness to read is nothing new to leftists whatsoever. People are like “it’s so long, it has to be AI”, “it’s badly written, it has to be AI”. Are y’all actually Marxists LOL? The group of people known for writing an absolute fuckton about everything and getting into detailed arguments about minor events from 100 years ago? Hello?

    Refusing to read is nothing new. People blame AI now, saying that they don’t know if what they are reading is just slop, because slop is so easily made. But in my experience, it is usually possible to tell whether something is AI within the first 10 seconds of reading it, regardless of length, ESPECIALLY if you know the subject area. It really seems like the same old cope I saw in online discussions 5+ years ago. But I deal with AI labeling as a job, so I am probably better at detecting it, I guess.

    Overall, this comment section makes me wonder if AI is better at Marxism than most human Marxists. Years ago, I noticed that only about 5% of self-described Marxists actually understood dialectical materialism; the rest just blindly believed it. I see that that hasn’t changed.



  • One is mostly talking possible potential, the other about currently existing systems.

    I criticize all of those positions, however, in both the present and the future.

    Knowledge comes from real world practice and experience, whether it is direct (real-world experimentation) or indirect (human language or some other form).

    In the short-term, AI is limited by human language, as that allows bypassing evolution and is the most efficient approach. In the long-term, AI is limited by real-world resources and production. In both cases, this invalidates the typical idea of singularity, where AI becomes super-intelligent in days, weeks, months, or years.

    As for “talking parrot” views: they are delusional, and by that I mean I honestly can’t convince people otherwise; only exposure can. They are pretty common among the left too, as we can see in this comment section.

    Thinking of one of the simplest tasks for AI, summarization: how could it possibly do summarization without comprehending the text and its main points to at least a small degree? How could it form the sentences, identify what is unnecessary, or do anything? Are we going to pretend that it is simply counting words by frequency, or generating tags? It is delusional. Sure, it messes up at times, but to pretend that it is equivalent to random chance or a simple algorithm is insane, regardless of how many people believe it.

    In terms of a more recent capability, if we have a series of images as a camera moves through an environment (for example, a building or city), SOTA vision language models such as Gemini 2.5 Pro and GPT 5 are capable of comprehending that we are moving through an environment and are capable of providing instructions for moving to a location from the current position. This means that they can perceive the world, have spatial memory of this world, explain it in human language, and know how to navigate this world. Are we going to pretend that this is just an implementation of SLAM and not a (low) level of intelligence?
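
    For concreteness, a hedged sketch of the kind of probe I mean, using the OpenAI-style chat API (the model name, frame paths, and prompt are placeholders, not a specific benchmark):

    ```python
    import base64
    from openai import OpenAI

    client = OpenAI()

    def frame(path: str) -> dict:
        """Pack a local camera frame as an image message part."""
        with open(path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()
        return {"type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}

    # An ordered series of frames from a camera moving through a building.
    frames = [frame(f"walk/frame_{i:03d}.jpg") for i in range(8)]

    response = client.chat.completions.create(
        model="gpt-5",  # placeholder model name
        messages=[{
            "role": "user",
            "content": [{"type": "text",
                         "text": "These frames are in order as I walk through "
                                 "a building. Where am I now, and what turns "
                                 "take me back to the entrance?"}] + frames,
        }],
    )
    print(response.choices[0].message.content)
    ```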

    Considering that most warehouse logistics jobs are effectively “move this object from its current location to a different location”, AI is already not far off from this in terms of intelligence: it is able to remember an environment and how to navigate it, and it is able to identify objects. Can it clean up a mess, re-wrap a package, or similar? Not necessarily, but even in warehouses those tasks are delegated to specialized roles with much lower manpower. I’m not really saying that AI has a lot of intelligence; really, I’m saying that humans are overkill for the role. In the case of warehouses, there are already manually-programmed robots doing close to this without AI, so it really isn’t that hard. The biggest limitation is not the intelligence itself, but the context length: if you give it too many images, too many instructions, or a task that takes too many steps, it will fall apart. With a real-world task, that will happen very quickly.

    I used to believe that AI intelligence was fake, and I only changed my mind because I do data labeling as my job and also use it for software engineering. Since I use it constantly, I know that it is both unbelievably stupid and also has non-zero intelligence, and that the ideas of human intelligence in the near future, superintelligence in the next 100 years, and zero intelligence today are wrong.