• 0 Posts
  • 23 Comments
Joined 5 months ago
Cake day: November 30th, 2024


  • Many worlds theories are rather strange.

    If you take quantum theory at face value without trying to modify it in any way, then you unequivocally run into the conclusion that ψ is contextual, that is to say, what ψ you assign to a system depends upon your measurement context, your “perspective” so to speak.

    This is where the “Wigner’s friend paradox” arises. It’s not really a “paradox,” as it really just shows ψ is contextual. If Wigner and his friend place a particle in a superposition of states and Wigner steps out of the room while his friend measures it, then from the friend’s perspective ψ is reduced to an eigenstate, whereas from Wigner’s perspective ψ remains in a superposition of states, but one entangled with the measuring device.

    This isn’t really a contradiction, because in density matrix form Wigner can apply a perspective transformation and confirm that his friend would indeed perceive an eigenstate, with the probabilities for which one given by the Born rule, but it does illustrate the contextual nature of quantum theory.
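
    As a rough sketch of what that perspective transformation looks like in practice (a toy example I’m adding; the amplitudes are arbitrary and “friend” here stands in for the friend plus measuring device): from Wigner’s point of view the joint state is entangled, but the partial trace over the friend hands back an ordinary probability distribution over the outcomes, weighted by the Born rule.

    ```python
    import numpy as np

    # Particle starts in a superposition a|0> + b|1> (amplitudes chosen arbitrarily).
    a, b = np.sqrt(0.3), np.sqrt(0.7)

    # From Wigner's perspective, after the friend measures, particle and friend
    # are entangled: a|0>|"saw 0"> + b|1>|"saw 1">.
    psi = a * np.kron([1, 0], [1, 0]) + b * np.kron([0, 1], [0, 1])
    rho = np.outer(psi, psi.conj())          # density matrix of the joint system

    # Partial trace over the friend gives the particle's reduced state.
    rho = rho.reshape(2, 2, 2, 2)            # indices: (particle, friend, particle', friend')
    rho_particle = np.einsum('ijkj->ik', rho)

    print(np.round(rho_particle, 3))
    # [[0.3 0. ]
    #  [0.  0.7]]  -> no off-diagonal terms: an ordinary probability distribution
    #               with Born rule weights |a|^2, |b|^2, not a superposition.
    ```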

    If you just stop there, you inevitably fall into relational quantum mechanics. Relational quantum mechanics just accepts the contextual nature of ψ and tries to make sense of it within the mathematics itself. Most other “interpretations” really aren’t even interpretations but sort of try to run away from the conclusion, for example by significantly modifying the mathematics and even the statistical predictions to introduce objective collapse or hidden variables, either to get rid of a contextual ψ or to get rid of ψ as something fundamental altogether.

    Many Worlds is still technically along these lines because it does add new mathematics explicitly for the purpose of avoiding the conclusion of irreducible contextuality, although it is the most subtle modification and still reproduces the same statistical predictions. If we go back to the Wigner’s friend scenario, Wigner’s friend reduced ψ relative to his own context, but Wigner, who was isolated from the friend and the particle, did not reduce ψ but instead described them as entangled.

    So, any time you measure something, you can imagine introducing a third-party that isn’t physically interacting with you or the system, and from that third party’s perspective you would be in an entangled superposition of states. But what about the physical status of the third party themselves? You could introduce a fourth party that would see the system and the third party in an entangled superposition of states. But what about the fourth party? You could introduce a fifth party… so on and so forth.

    You have an infinite regress until, somehow, you end up with Ψ, which is a sort of “view from nowhere,” a perspective that contains every physical object, is isolated from all those physical objects, and is itself not a physical object, so it can contain everything. So from the perspective of this big Ψ, everything always remains in a superposition of states forever, and all the little ψ are only contextual because they are like perspectival slices within Ψ.

    You cannot derive Ψ mathematically because there is no way to get from the inherently contextual ψ to this preferred nonphysical perspective Ψ, so you cannot know its mathematical properties. There is also no way to define it, because each ψ is an element of a Hilbert space, and Hilbert spaces are constructed spaces, unlike background spaces like Minkowski space. The latter are defined independently of the objects they contain, whereas the former are defined in terms of the objects they contain. That means for two different physical systems, you will have two different ψ assigned to two different Hilbert spaces. The issue is that you cannot define the Hilbert space that Ψ is part of, because it would require knowing everything in the universe.

    Hence, Ψ cannot be derived nor defined, so it can only be vaguely postulated, and its mathematical properties also have to be postulated, as you cannot derive them from anything. It is just postulated to be this privileged cosmic perspective, a sort of godlike ethereal “view from nowhere,” which is then postulated to have the same mathematical properties as ψ, with all the ψ further postulated to be subsystems of Ψ. You can then write things down like how a partial trace on Ψ can give you information about any perspective of its subsystems, but only because it was defined to have those properties. It is true by definition.

    RQM just takes quantum theory at face value without bothering to introduce a Ψ and simply accepts that ψ is contextual. Talking about a non-contextual (absolute) ψ makes about as much sense as talking about non-contextual (absolute) velocity, and talking about a privileged perspective in QM makes about as much sense as talking about a privileged perspective in special relativity. For some reason, people are perfectly happy accepting the contextual nature of special relativity, but they struggle really hard with the contextual nature of quantum theory and feel the need to modify it, to the point of convincing themselves that there is a multiverse in order to escape it.


  • The development process of capitalism does not so much as produce “centralisation” (which is ill defined tbh) but socialisation (the conversion of individual labor to group labor), urbanisation and standardisation.

    This is just being a pedant. Just about every Marxist author uses the two interchangeably. We are talking about the whole economy coming under a single common enterprise that operates according to a common plan, and the process of centralization/socialization/consolidation/etc is the gradual transition from scattered and isolated enterprises to larger and larger consolidated enterprises, from small producers to big oligopolies to eventually monopolies.

    Furthermore, while it is true that socialist society develops out of capitalist society, revolutions are by definition a breaking point in the mode of production which makes the insistence that socialist societies must be highly centralised backwards logic.

    Marxism is not about completely destroying the old society and building a new one from the void left behind. Humans do not have the “free will” to build any kind of society they want. Marxists view the on-the-ground organization of production as determined by the forces of production themselves, not by politics or economic policies. When the feudal system was overthrown in the French Revolution, it was not as if the French people just decided to then transition from total feudalism to total capitalism. Feudalism at that point basically didn’t even exist anymore; the industrial revolution had so drastically changed the conditions on the ground that it was basically already capitalism and entirely disconnected from the feudal superstructure.

    Marx compared it to how when the firearm was invented, battle tactics had to change, because you could not use the same organizational structure with the invention of new tools. Engels once compared it to Darwinian evolution but for the social sciences, not because of the natural selection part, but the gradual change part. The political system is always implemented to reflect an already-existing way of producing things that arose on the ground of its own accord, but as the forces of production develop, the conditions on the ground very gradually change in subtle ways, and after hundreds of years, they will eventually become incredibly disconnected from the political superstructure, leading to instability.

    Marx’s argument for socialism is not a moralistic one. It is precisely that centralized production is incompatible with individual ownership, and that the development of the forces of production, very slowly but surely, replaces individual production with centralized production, destroying the foundations of capitalism in the process and developing towards a society that is entirely incompatible with the capitalist superstructure, leading to social and economic instability, with the only way out being to replace individual appropriation with socialized appropriation through the expropriation of those enterprises.

    The foundations remain the same; the superstructure on top of those foundations changes. The idea that the forces of production lead directly to centralization yet post-capitalist society doesn’t have to be centralized is straight-up anti-Marxist idealism. You are just not a Marxist, and that’s fine; if you are an anarchist, just be an anarchist and say you are one, and don’t try to misrepresent Marxian theory.

    We are starting from a dislike of anarchism’s dogma of decentralisation and just working backwards.

    Oh wow, all of Marxism is apparently just anarchist hate! Who knew! Marxism debunked! No, it’s because Marxists are just like you: they don’t believe the development of the past society lays the foundations for the future society, they are not historical materialists, but believe humanity has the free will to build whatever society they want, and so they want to destroy the old society completely rather than sublating it, and build a new society out of the ashes left behind. They dream of taking all the large centralized enterprises and “busting them up” so to speak.



  • pcalau12i@lemmygrad.ml to Science Memes@mander.xyz: ETERNAL TORMENT

    There are no “paradoxes” of quantum mechanics. QM is a perfectly internally consistent theory. Most so-called “paradoxes” are just caused by people not understanding it.

    QM is both probabilistic and, in its own and very unique way, relative. Probability on its own isn’t confusing: if the world were just fundamentally random, you could still describe it in the language of classical probability theory and it wouldn’t be that difficult. If it were just relative, it could still be a bit of a mind-bender, like special relativity with its own faux paradoxes (like the twin “paradox”) that people struggle with, but ultimately people digest it and move on.

    But QM is probabilistic and relative, and for most people this becomes very confusing, because it means a particle can take on a physical value in one perspective while not having taken on a physical value in another (called the relativity of facts in the literature), and not only that, but because it’s fundamentally random, if you apply a transformation to try to mathematically place yourself in another perspective, you don’t get definite values but only probabilistic ones, albeit not in a superposition of states.

    For example, the famous “Wigner’s friend paradox” claims there is a “paradox” because you can set up an experiment whereby Wigner’s friend would assign a particle a real physical value whereas Wigner would be unable to from his perspective and would have to assign an entangled superposition of states to both his friend and the particle taken together, which has no clear physical meaning.

    However, what the supposed “paradox” misses is that it’s not paradoxical at all, it’s just relative. Wigner can apply a transformation in Hilbert space to compute the perspective of his friend, and what he would get out of that is a description of the particle that is probabilistic but not in a superposition of states. It’s still random because nature is fundamentally random, so he cannot predict what his friend would see with absolute certainty, but he can predict it probabilistically, and since this probability distribution is not a superposition of states but what’s called a maximally mixed state, it is basically a classical probability distribution.
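
    To make that concrete with the usual textbook toy version (my own illustration with an equal-weight superposition, nothing specific to any real experiment): from Wigner’s perspective the joint state of particle and friend is entangled, but tracing the friend out leaves the particle in a maximally mixed state,

    $$
    |\Psi\rangle = \tfrac{1}{\sqrt{2}}\big(|{\uparrow}\rangle\,|F_\uparrow\rangle + |{\downarrow}\rangle\,|F_\downarrow\rangle\big), \qquad
    \rho_{\text{particle}} = \mathrm{Tr}_{F}\,|\Psi\rangle\langle\Psi| = \tfrac{1}{2}\,|{\uparrow}\rangle\langle{\uparrow}| + \tfrac{1}{2}\,|{\downarrow}\rangle\langle{\downarrow}| ,
    $$

    which has no interference terms left in it: just a 50/50 classical probability distribution over the two outcomes the friend could have seen.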

    But you only get those classical distributions after applying the transformation to the correct perspective where such a distribution is to be found, i.e. what the mathematics of the theory literally implies is that only under some perspectives (defined in terms of any physical system at all, kind of like a frame of reference, nothing to do with human observers) are the physical properties of the system actually realized, while under some other perspectives, the properties just aren’t physically there.

    The Schrodinger’s cat “paradox” is another example of a faux paradox. People repeat it as if it is meant to explain how “weird” QM is, but when Schrodinger put it forward in his paper “The Present Situation in Quantum Mechanics,” he was using it to mock the idea of particles literally being in two states at once, by pointing out that if you believe this, then a chain reaction caused by that particle would force you to conclude cats can be in two states at once, which, to him, was obviously silly.

    If the properties of particles only exist in some perspectives and aren’t absolute, then a particle can’t meaningfully have “individuality,” that is to say, you can’t define it in complete isolation. In his book “Science and Humanism,” Schrodinger talks about how, in classical theory, we like to imagine particles as having their own individual existence, moving around from interaction to interaction, carrying their properties with themselves at all times. But, as Schrodinger points out, you cannot actually empirically verify this.

    If you believe particles have continued existence in between interactions, this is only possible if the existence of their properties are not relative so they can be meaningfully considered to continue to exist even when entirely isolated. Yet, if they are isolated, then by definition, they are not interacting with anything, including a measuring device, so you can never actually empirically verify they have a kind of autonomous individual existence.

    Schrodinger pointed out that many of the paradoxes in QM carry over from this Newtonian way of thinking, that particles move through space with their own individual properties like billiard balls flying around. If this were to be the case, then it should be possible to assign a complete “history” to the particle, that is to say, what its individual properties are at all moments in time without any gaps, yet, as he points out in that book, any attempt to fill in the “gaps” leads to contradiction.

    One of these contradictions is the famous “delayed choice” paradox, whereby if you imagine what the particle is doing “in flight” when you change your measurement settings, you have to conclude the particle somehow went back in time to rewrite the past to change what it is doing. However, if we apply Schrodinger’s perspective, this is not a genuine “paradox” but just a flaw of actually interpreting the particle as having a Newtonian-style autonomous existence, of having “individuality” as he called it.

    He also points out in that book that when he originally developed the Schrodinger equation, the purpose was precisely to “fill in the gaps,” but he realized later that interpreting the evolution of the wave function according to the Schrodinger equation as a literal physical description of what’s going on is a mistake, because all you are doing is pushing the “gap” from those that exist between interactions in general to those that exist between measurements, and he saw no reason as to why “measurement” should play an important role in the theory.

    Given that it is possible to make all the same predictions without using the wave function (using a mathematical formalism called matrix mechanics), you don’t have to reify the wave function because it’s just a result of an arbitrarily chosen mathematical formalism, and so Schrodinger cautioned against reifying it, because it leads directly to the measurement problem.
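
    Just to illustrate why the two formalisms agree, here is the standard textbook identity (nothing original on my part): you can either evolve the state and keep the observable fixed, or keep the state fixed and evolve the observable, and every expectation value comes out the same,

    $$
    |\psi(t)\rangle = U(t)\,|\psi(0)\rangle, \qquad A_H(t) \equiv U^\dagger(t)\,A\,U(t), \qquad
    \langle\psi(t)|\,A\,|\psi(t)\rangle = \langle\psi(0)|\,A_H(t)\,|\psi(0)\rangle ,
    $$

    so the wave-function (Schrodinger) picture and the matrix-mechanics (Heisenberg) picture make identical predictions.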

    The EPR “paradox” is a metaphysical “paradox.” We know for certain QM is empirically local due to the no-communication theorem, which proves that no interaction a particle could undergo could ever cause an observable alteration on its entangled pair. Hence, if there is any nonlocality, it must be invisible to us, i.e. entirely metaphysical and not physical. The EPR paper reaches the “paradox” through a metaphysical criterion it states very clearly on the first page, which is to equate the ontology of a system to its eigenstates (to “certainty”). This makes it seem like the theory is nonlocal because entangled particles are not in eigenstates, but if you measure one, both are suddenly in eigenstates, which makes it seem like they both undergo an ontological transition simultaneously, transforming from not having a physical state to having one at the same time, regardless of distance.
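
    The no-communication theorem I’m appealing to can be stated in one line (standard form, with A and B labeling the two entangled particles): whatever local operation $\mathcal{E}_B$ is performed on particle B, the reduced state of particle A is untouched,

    $$
    \mathrm{Tr}_B\!\big[(\mathcal{I}_A \otimes \mathcal{E}_B)(\rho_{AB})\big] \;=\; \mathrm{Tr}_B\big[\rho_{AB}\big] \;=\; \rho_A ,
    $$

    so nothing done to one particle produces any observable change in the statistics of its entangled partner.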

    However, if particles only have properties relative to what they are physically interacting with, from that perspective, then ontology should be assigned to interaction, not to eigenstates. Indeed, assigning it to “certainty” as the EPR paper claims is a bit strange. If I flip a coin, even if I can predict the outcome with absolute certainty by knowing all of its initial conditions, that doesn’t mean the outcome actually already exists in physical reality. To exist in physical reality, the outcome must actually happen, i.e. the coin must actually land. Just because I can predict the particle’s state at a distance if I were to travel there and interact with it doesn’t mean it actually has a physical state from my perspective.

    I would recommend checking out this paper here, which shows how a relative ontology avoids the “paradox” in EPR. I also wrote my own blog post here; if you go to the second half, it has some tables that walk through how the ontology differs between EPR and a relational ontology and how the former is clearly nonlocal while the latter is clearly local.

    Some people frame Bell’s theorem as a paradox that proves some sort of “nonlocality,” but if you understand the mathematics it’s clear that Bell’s theorem only implies nonlocality for hidden variable theories. QM isn’t a hidden variable theory. It’s only a difficulty that arises in alternative theories like pilot wave theory, which due to their nonlocal nature have to come up with a new theory of spacetime because they aren’t compatible with special relativity’s speed of light limit. However, QM on its own, without hidden variables, is indeed compatible with special relativity, which forms the foundations of quantum field theory. This isn’t just my opinion; if you go read Bell’s own paper where he introduces the theorem, he is blatantly clear in the conclusion, in plain English, that it only implies nonlocality for hidden variable theories, not for orthodox QM.
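
    As a quick illustration of what the theorem actually constrains (a toy sketch I’m adding, using the standard CHSH angles; the names are just for the example): local hidden variable models are bounded by |S| ≤ 2, while orthodox QM’s prediction for a singlet state, E(a,b) = −cos(a−b), exceeds that bound without any hidden variables in sight.

    ```python
    import numpy as np

    # Quantum correlation for spin measurements on a singlet state at angles a, b.
    def E(a, b):
        return -np.cos(a - b)

    # Standard CHSH measurement angles (in radians).
    a, a2 = 0.0, np.pi / 2
    b, b2 = np.pi / 4, 3 * np.pi / 4

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S))   # ~2.828 = 2*sqrt(2), above the local hidden variable bound of 2
    ```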

    Some “paradoxes” just are much more difficult to catch because they are misunderstandings of the mathematics which can get hairy at times. The famous Frauchiger–Renner “paradox” for example stems from incorrect reasoning across incompatible bases, a very subtle point lost in all the math. The Cheshire cat “paradox” tries to show particles can disassociate from their properties, but those properties only “disassociate” across different experiments, meaning in no singular experiment are they observed to dissociate.

    I ran out of charact-


  • This is a completely US/Euro-centric view of what artists are and it’s fucked up to say. We should not be celebrating more workers getting the short end of the stick, we should be showing them solidarity and showing them the way to organization.

    Are artists who work for themselves something that only occurs in the US and Europe? I guess I just live under a rock, genuinely did not know.

    Antagonizing them just because you think they are petite-bourgeois is completely counterproductive. Most artists are either just making ends meet

    I don’t know what “antagonizing” has to do with anything here, and if you work for yourself you are by definition petty-bourgeois. How successful you are at that isn’t relevant. The point is not about moralizing, I get the impression when you talk about “antagonizing” you are moralizing these terms and acting like “petty-bourgeoisie” is an insult. It’s not. Many members of the petty-bourgeoisie are genuinely good people just trying to make their way in the world. It’s not a moral category.

    I am talking about their material interests. A person who works for themselves isn’t as alienated from their labor as someone who works for a big company, and this leads them to also value property rights more because they have more control over what they produce and what is done with what they produce.

    or working for big companies like every other worker

    If you really are working for a big company where, like all regular workers, you don’t get much say in what you produce or any control over it in the first place, then yes, your position is more in line with a member of the proletariat already, but a person like that would also be easier to appeal to. They wouldn’t have as much material interest in protecting intellectual property laws because they are already alienated from what they produce.

    In my personal experience (I have no data on this so take it with a grain of salt), petty-bourgeois artists tend to be more difficult to appeal to because even in the cases where they have left-leaning tendencies, they tend to lean more towards things like anarchism where they believe they can still operate as a petty-bourgeois small producer. I remember one anarchist artist who even told me that they would still want community enforcement of copyright under an anarchist society because they were afraid of people copying their art.

    Maybe you are right and I am just sheltered and most artists outside of US and Europe work for big companies and the kind of “self-made” artist is more of a western-centric thing. But if that’s the case, you can consider the commentary to be more focused on the west, because it still is worth discussing even if it’s not universally applicable.

    This doesn’t mean they will suddenly develop class consciousness.

    Of course, people only develop at best union consciousness on their own. You are already seeing increased unionization and union activities from artists in response to AI. For class consciousness, people need to be educated.

    They were never a part of the bourgeoisie to begin with, and therefore our interests were already aligned.

    Many, at least here in the west where I live, are petty-bourgeois. Not all, but the “self-made” ones tend to be the most vocal against things like AI and they care the most about protecting things like copyright and IP law. If you’re working for a big company, the stuff you draw belongs to the company, and even if it didn’t, it would still have no utility to yourself because it’s designed specifically to be used in company materials, so not only do these property right laws not allow you to keep what you draw, but even if they were removed, you wouldn’t want to keep it, either, because it has no use to you.

    That is why the proletariat is more alienated from their labor, and why they have less material interests in trying to maintain these kinds of property right laws. Of course, that doesn’t mean a person of the petty-bourgeois class can’t be appealed to, but it is a bit harder. In the Manifesto, Marx and Engels argue they can be appealed to in the case where they view their ruination and transformation into a member of the proletariat as far more likely than ever succeeding and advancing to become a member of the bourgeois class.

    But Marx and Engels also argue that they are typically reactionary because they want to hold back the natural development of the productive forces, such as automation, precisely because it will lead to most of their ruination. This is the major problem with a lot of petty-bourgeois artists: they want to hold back automation in terms of AI because they are afraid it will hurl them into the proletariat. However, as automation continues to progress, eventually it will have gone so far that it’s clear there is no going back, and they will have to come to grips with this fact, and that’s when the proletariat can start appealing to them.

    It was the same thing that Engels recommended to the peasantry. The ruination of the peasantry, like the petty-bourgeoisie, is inevitable with the development of the forces of production, specifically with the development of new productive forces that massively automate and semi-automate many aspects of agriculture. So, the proletariat should never promise to the peasantry to preserve their way of life forever, but rather, they should only promise to the peasantry better conditions during this process of being transformed into members of the proletariat, i.e. Engels specifically argued that collectivizing the peasant farms would allow them to develop into farming enterprises in a way that saves the peasants from losing their farms, which the majority would under the normal course of development.

    Similarly, we should not promise to any petty bourgeois worker that we are going to hold back or even ban the development of the forces of production to preserve their way of life, but only that a socialist revolution would provide them better conditions in this transformation process. Yes, as you said, many of these artists are “just making ends meet,” and that’s the normal state of affairs. The petty-bourgeoisie are called petty for a reason, they are not your rich billionaires, most in general are struggling.

    As for petty-bourgeois artists, if we simply banned AI, their life would still be shit, because we would just be stopping the development of the productive forces to preserve their already shitty way of life. In a socialist state, however, they would be provided for much more adequately, and so even though they would have to work in a public enterprise and could no longer be a member of the petty-bourgeoisie, they would actually have a much higher and more stable quality of living than “just making ends meet.” They would have financial security and stability, and more access to education and free time to pursue artistry that isn’t tied to making a living.

    Marxists should not be in the business of trying to stall the progress of history to save non-proletarian classes, and the artists who work for big corporations who don’t own their art are already proletarianized, so the development of AI doesn’t change much for them.


  • The centralization of production is the material foundation for socialist society and the core of Marx’s historical materialist argument as to why capitalism is not an eternally sustainable system. If you abandon it, then Marxism might as well be thrown in the trash, because it would no longer have a materialist argument against capitalism. You could only mount a moralist argument at that point. Unless you are arguing that there is a different historical materialist argument you could make that doesn’t rely on appealing to the laws of centralization?


  • (1) Marxists are pro-centralization, not decentralization. We’re not anarchists/libertarians. This is good for us as it lays the foundations for socialist society, while also increasing the contradictions within capitalist society, bringing the socialist revolution closer to fruition.

    This centralist tendency of capitalistic development is one of the main bases of the future socialist system, because through the highest concentration of production and exchange, the ground is prepared for a socialized economy conducted on a world-wide scale according to a uniform plan. On the other hand, only through consolidating and centralizing both the state power and the working class as a militant force does it eventually become possible for the proletariat to grasp the state power in order to introduce the dictatorship of the proletariat, a socialist revolution.

    — Rosa Luxemburg, On the National Question

    Communist society is stateless. But if true - and most certainly it is - what really is the difference between anarchists and Marxist communists? Does this difference no longer exist, at least on the question of the future society and the “ultimate goal”? Of course it exists, but is altogether different. It can be briefly defined as the difference between large centralized production and small decentralized production. We communists on the other hand believe that the future society…is large-scale centralized, organized and planned production, tending towards the organization of the entire world economy…Future society will not be born of “nothing”, will not be delivered from the sky by a stork. It grows within the old world and the relationships created by the giant machinery of financial capital. It is clear that the future development of productive forces (any future society is only viable and possible if it develops the productive forces of the already outdated society) can only be achieved by continuing the tendency towards the centralization of the production process, and the improved organization of the “direction of things” replacing the former “direction of men”.

    — Nikolai Bukharin, Anarchy and Scientific Communism

    The advance of industry, whose involuntary promoter is the bourgeoisie, replaces the isolation of the labourers, due to competition, by the revolutionary combination, due to association. The development of Modern Industry, therefore, cuts from under its feet the very foundation on which the bourgeoisie produces and appropriates products. What the bourgeoisie therefore produces, above all, are its own grave-diggers.

    — Marx & Engels, Manifesto of the Communist Party

    (2) Much of your discussion just regards how AI is turning artists into an “extension of the machine” and further alienating their labor. But, like, that’s already true for most workers. Petty bourgeois artists will have to fall to the low, low place of the common working man… gasp! The reality is that it is good for us, because a lot of these petty bourgeois artists, precisely because they are “self-made” and not as alienated from their labor as regular workers, tend to have more positive views of property right laws. If more of them become “extensions of the machine” like all the proles, then their interests will become more materially aligned with the proles. They would stop seeing art as a superior kind of labor that makes them better and more important than other workers, and would instead see themselves as equal with the working class and having interests aligned with them.

    (3) Your discussion regarding Deepseek is confusing. Yes, the point of AI is to improve productivity, but this is an objectively positive thing and the driving force of history that all Marxists should support. The whole point of revolution is that the previous system becomes a fetter on improving productivity. Whether or not Deepseek was created to improve productivity for capitalist or socialist reasons, either way, improving productivity is a positive thing. It is good to reduce labor costs.

    [I]t is only possible to achieve real liberation in the real world and by employing real means, that slavery cannot be abolished without the steam-engine and the mule and spinning-jenny, serfdom cannot be abolished without improved agriculture, and that, in general, people cannot be liberated as long as they are unable to obtain food and drink, housing and clothing in adequate quality and quantity. “Liberation” is an historical and not a mental act, and it is brought about by historical conditions, the development of industry, commerce, agriculture, the conditions of intercourse.

    — Marx, Critique of the German Ideology

    The proletariat will use its political supremacy to wrest, by degrees, all capital from the bourgeoisie, to centralise all instruments of production in the hands of the State, i.e., of the proletariat organised as the ruling class; and to increase the total of productive forces as rapidly as possible.

    — Marx & Engels, Manifesto of the Communist Party

    (4) Clearly, for the proletariat, the “full proletarianization of the arts” is by definition a good thing for the proletarian movement.



  • I will be the controversial one and say that I reject that “consciousness” even exists in the philosophical sense. Of course, things like intelligence, self-awareness, problem-solving capabilities, even emotions exist, but it’s possible to describe all of these things in purely functional terms, which would in turn be computable. When people talk about “consciousness not being computable,” they are talking about the Chalmerite definition of “consciousness” popular in philosophical circles specifically.

    This is really just a rehashing of Kant’s noumena-phenomena distinction, but with different language. The rehashing goes back to the famous “What is it like to be a bat?” paper by Thomas Nagel. Nagel argues that physical reality must be independent of point of view (non-contextual, non-relative, absolute), whereas what we perceive clearly depends upon point of view (contextual). You and I are not seeing the same thing, for example: even if we look at the same object, we will see different things from our different standpoints.

    Nagel thus concludes that what we perceive cannot be reality as it really is, but must be some sort of fabrication by the mammalian brain. It is not equivalent to reality as it really is (which is said to be non-contextual) but must be something irreducibly tied to the subject. What we perceive, therefore, he calls “subjective,” and since observation, perception and experience are all synonyms, he calls this “subjective experience.”

    Chalmers later, in his paper “Facing Up to the Problem of Consciousness,” renames this “subjective experience” to “consciousness.” He points out that if everything we perceive is “subjective” and created by the brain, then true reality must be independent of perception, i.e. no perception could ever reveal it; we can never observe it and it always lies beyond all possible observation. How does this entirely invisible reality, which is completely disconnected from everything we experience, in certain arbitrary configurations, “give rise to” what we experience? This “explanatory gap” he calls the “hard problem of consciousness.”

    This is just a direct rehashing, in different words, of Kant’s phenomena-noumena distinction, where the “phenomena” is the “appearance of” reality as it exists from different points of view, and the “noumena” is that which exists beyond all possible appearances, the “thing-in-itself,” which, as the term implies, suggests it has absolute (non-contextual) properties, as it can be meaningfully considered in complete isolation. Velocity, for example, is contextual, so objects don’t meaningfully have velocity in complete isolation; to say objects meaningfully exist in complete isolation is thus to make a claim that they have a non-contextual ontology. This leads to the same kind of “explanatory gap” between the two which was previously called the “mind-body problem.”

    The reason I reject Kantianism and its rehashing by the Chalmerites is because Nagel’s premise is entirely wrong. Physical reality is not non-contextual. There is no “thing-in-itself.” Physical reality is deeply contextual. The imagined non-contextual “godlike” perspective whereby everything can be conceived of as things-in-themselves in complete isolation is a fairy tale. In physical reality, the ontology of a thing can only be assigned to discrete events whereby its properties are always associated with a particular context, and, as shown in the famous Wigner’s friend thought experiment, the ontology of a system can change depending upon one’s point of view.

    This non-contextual physical reality of Nagel’s is just a fairy tale, and so the conclusion in the rest of his paper, that what we observe (synonyms: experience, perceive) is “subjective,” does not follow; and if Nagel fails to establish “subjective experience,” then Chalmers fails to establish “consciousness,” which is just a renaming of this term, and thus Chalmers fails to demonstrate an “explanatory gap” between consciousness and reality because he has failed to establish that “consciousness” is a thing at all.

    What’s worse is that if you buy Chalmers’ and Nagel’s bad arguments then you basically end up equating observation as a whole with “consciousness,” and thus you run into the Penrose conclusion that it’s “non-computable.” Of course we cannot compute what we observe, because what we observe is not consciousness, it is just reality. And reality itself is not computable. The way in which reality evolves through time is computable, but reality as a whole just is. It’s not even a meaningful statement to speak of “computing” it, as if existence itself is subject to computation, but Chalmerite delusion tricks people like Penrose into thinking this reveals something profound about the human mind, when it’s not relevant to the human mind at all.



  • That’s more religion than pseudoscience. Pseudoscience tries to pretend to be science and tricks a lot of people into thinking it is legitimate science, whereas religion just makes proclamations and claims any evidence that debunks them must be wrong. Pseudoscience is a lot more sneaky, and has become more prevalent in academia itself ever since people were infected by the disease of Popperism.

    Popperites believe something is “science” as long as it can in principle be falsified, so if you invent a theory that could in principle be tested, then you have proposed a scientific theory. So pseudoscientists come up with the most ridiculous nonsense ever based on literally nothing and then insist everyone must take it seriously because it could in theory be tested one day, but it is always just out of reach of actually being tested.

    Since it is testable, and the brain disease of Popperism that has permeated academia leads people to be tricked by this sophistry, sometimes these pseudoscientists can even secure funding to test it, especially if they can get a big name in physics to endorse it. If it’s being tested at some institution somewhere, if there are at least a couple of papers published of someone looking into it, it must be genuine science, right?

    Meanwhile, while they create this air of legitimacy, a smokescreen around their ideas, they then reach out to a lay audience by publishing books, doing documentaries on television, or publishing videos to YouTube, talking about woo nuttery like how we’re all trapped inside a giant “cosmic consciousness” and we all feel each other’s vibrations through quantum entanglement, and that somehow science proves the existence of gods.

    As they make immense dough off of the lay audience they grift, if anyone points out that their claims are based on nothing, they can just deflect to the smokescreen they created through academia.


  • Because you use a prompt in natural language to produce some stuff for you…? In this case a translation. There are already entire companies that sell books translated using AI, and there are a lot of them on Amazon. If “generative AI” were to refer to anything at all, it seems strange you would want it to exclude entire books generated by AI.

    If you want to be strict about natural language actually being complete and grammatically correct sentences like we’re using here, then translation software is generative AI, but some AI image generators like Stable Diffusion are not, since they rely on you using a list of positive and negative tags rather than sentences you would speak. It would also mean that if I build an AI to send commands to a robot based on voice commands, that would qualify as generative AI as well, since it is producing the command output for me based on speech.


  • Generative AI is colloquially used to refer to AI which you prompt in natural language to produce some stuff for you. If you prompt some AI to make music or protein sequences for you then that is generative AI too. It is a loose term and not something that AI scholars agree upon but it is not meaningless.

    Again, you only proved my point as you gave me a definition that applies to things like OCR, translation software, and voice recognition, which people wouldn’t colloquially categorize as generative AI. You cannot provide a definition that gives the kind of carve-out you want because it doesn’t exist, and any attempt to do so only solidifies my point further. The carve-out is ultimately arbitrary, it is just an arbitrary list of AI people don’t like.


  • That didn’t address the point I was making, all AI is ultimately about generating outputs, so I am not sure where your line of “generative AI” actually begins and ends. The term is absolutely 100% meaningless if I have zero idea what even qualifies as “generative AI” and what doesn’t, because then you aren’t telling me anything, I don’t know what you’re saying you like and what you don’t like, and different people would probably have different ideas over what even counts as “generative AI.” I am saying the term is too ambiguous for me to even know what is being talked about and your response is “well it’s just a lot of things and dontcha know in the English language we use terms for a lot of things all the time.” Like… what??? How is that a response. An appropriate response to what I said would be to actually tell me something more concrete I could use to judge whether or not something counts as “generative AI” vs not.

    From my standpoint it really seems like “generative AI” is just a stand-in for “AI I don’t like.” People use it and arbitrarily lump in things they consider “slop factories” like image generators or ChatGPT, but when you point out that plenty of other AI actually does have very practical uses in the sciences, some of it also being LLMs or based on diffusion technology, they will say “erm well I just dislike generative AI,” even though the technology is fundamentally the same and both are generating content. The caveat is not really any more meaningful than a placeholder for AI people think is bad.



  • A lot of computer algorithms are inspired by nature. Sometimes when we can’t figure out a problem, we look and see how nature solves it, and that inspires new algorithms to solve those problems. One problem computer scientists struggled with for a long time is tasks that are very simple for humans but very complex for computers, such as simply converting spoken words into written text. Everyone’s voice is different, and even the same people may speak in different tones, they may have different background audio, different microphone quality, etc. There are so many variables that writing a giant program to account for them all with a bunch of IF/ELSE statements in computer code is just impossible.

    Computer scientists recognized that computers are very rigid logical machines that process instructions serially, like stepping through a logical proof, whereas brains are very decentralized and massively parallelized computers that process everything simultaneously through a network of neurons, whose “programming” is determined by the strengths of the neural connections between the neurons, which are analogue rather than digital, only produce approximate solutions, and aren’t as rigorous as a traditional computer.

    This led to the birth of the artificial neural network. This is a mathematical construct that describes a system with neurons and configurable strengths of all its neural connections, and from that mathematicians and computer scientists figured out ways that such a neural network could also be “trained,” i.e. to configure its neural pathways automatically to be able to “learn” new things. Since it is mathematical, it is hardware-independent. You could build dedicated hardware to implement it, a silicon brain if you will, but you could also simulate it on a traditional computer in software.
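
    As a toy illustration of what “configuring the neural pathways automatically” amounts to in software (a minimal sketch I’m adding, not any real production system): a tiny two-layer network starts with random connection strengths and nudges them by gradient descent until its outputs match the training examples.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Training data: the XOR function, a classic example that is trivial to state
    # but awkward to capture with a single hand-written rule.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Connection strengths ("weights") start out random.
    W1 = rng.normal(size=(2, 8))
    W2 = rng.normal(size=(8, 1))

    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for step in range(10000):
        # Forward pass: propagate inputs through the network.
        h = sigmoid(X @ W1)          # hidden layer activations
        out = sigmoid(h @ W2)        # network's current guesses

        # Backward pass: adjust weights to reduce the error (gradient descent).
        err = out - y
        grad_out = err * out * (1 - out)
        grad_h = (grad_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ grad_out
        W1 -= 0.5 * X.T @ grad_h

    print(np.round(out.T, 2))  # approaches [[0, 1, 1, 0]]: the network has "learned" XOR
    ```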

    Computer scientists quickly found that by applying this construct to problems like speech recognition, they could supply the neural network tons of audio samples along with their transcribed text, and the neural network would automatically find patterns in them and generalize from them, so that when brand new audio is recorded it could transcribe it on its own. Suddenly, problems that at first seemed unsolvable became very solvable, and the approach started to be implemented in many places; language translation software, for example, is also based on artificial neural networks.

    Recently, people have figured out this same technology can be used to produce digital images. You feed a neural network a huge dataset of images and associated tags that describe them, and it will learn to generalize patterns that associate the images with the tags. Depending upon how you train it, this can go both ways. There are img2txt models, called vision models, that can look at an image and tell you in written text what the image contains. There are also txt2img models, which you can feed a description of an image and they will generate an image based upon it.

    All the technology is ultimately the same between text-to-speech, voice recognition, translation software, vision models, image generators, LLMs (which are txt2txt), etc. They are all fundamentally doing the same thing, just taking a neural network with a large dataset of inputs and outputs and training the neural network so it generalizes patterns from it and thus can produce appropriate responses from brand new data.

    A common misconception about AI is that it has access to a giant database and the outputs it produces are just stitched together from that database, kind of like a collage. However, that’s not the case. The neural network is always trained with far more data than could ever possibly fit inside the neural network, so it is impossible for it to remember its entire training data (if it could, this would lead to a phenomenon known as overfitting, which would render it nonfunctional). What actually ends up “distilled” in the neural network is just a big file called the “weights” file, which is a list of all the neural connections and their associated strengths.

    When the AI model is shipped, it is not shipped with the original dataset and it is impossible for it to reproduce the whole original dataset. All it can reproduce is what it “learned” during the training process.

    When the AI produces something, it first has an “input” layer of neurons, kind of like sensory neurons; that input may be a text prompt, an image, or something else. It then propagates that information through the network, and when it reaches the end, the final set of neurons forms the “output” layer, which is kind of like motor neurons in that each one is associated with some action, like plotting a pixel with a particular color value or writing a specific character.

    There is a feature called “temperature” that injects random noise into this “thinking” process, that way if you run the algorithm many times, you will get different results with the same prompt because its thinking is nondeterministic.
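
    Roughly what that looks like at the output stage of a text model (a simplified sketch with made-up scores; real systems differ in the details): the raw output scores are divided by the temperature before being turned into probabilities, so low temperatures almost always pick the top choice while higher temperatures make less likely choices come up more often.

    ```python
    import numpy as np

    rng = np.random.default_rng()

    # Made-up raw scores ("logits") the network assigns to four candidate next tokens.
    logits = np.array([2.0, 1.0, 0.2, -1.0])
    tokens = ["cat", "dog", "tree", "quasar"]

    def sample(logits, temperature):
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())   # softmax, shifted for numerical stability
        probs /= probs.sum()
        return tokens[rng.choice(len(tokens), p=probs)]

    print([sample(logits, 0.2) for _ in range(5)])  # low temperature: almost always "cat"
    print([sample(logits, 1.5) for _ in range(5)])  # high temperature: more varied picks
    ```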

    Would we call this process of learning “theft”? I think it’s weird to say it is “theft,” personally; it is directly inspired by how biological systems learn, of course with some differences to make it more suited to run on a computer, but the very broad principle of neural computation is the same. I can look at a bunch of examples on the internet and learn to do something, such as look at a bunch of photos to use as reference to learn to draw. Am I “stealing” those photos when I then draw an original picture of my own? People who claim AI is “stealing” either don’t understand how the technology works, or reach for the moon claiming things like it doesn’t have a soul or whatever so it doesn’t count, or just point to differences between AI and humans which are indeed differences but aren’t relevant ones.

    Of course, this only applies to companies that scrape data that really is just posted publicly so everyone can freely look at it, like on Twitter or something. Some companies have been caught illegally scraping data that was never put anywhere publicly, like Meta, who got in trouble for scraping libgen, a lot of the content on which is supposed to be behind a paywall. However, the law already protects people who get their paywalled data illegally scraped, as Meta is being sued over this, so it’s already on the side of the content creator here.

    Even then, I still wouldn’t consider it “theft.” Theft is when you take something from someone in a way that deprives them of using it. In that case it would be piracy, when you copy someone’s intellectual property for your own use without their permission, which ultimately doesn’t deprive the original person of the use of it. At best you can say that in some cases AI art, and AI technology in general, can be based on piracy. But this is definitely not a universal statement. And personally I don’t even like IP laws so I’m not exactly the most anti-piracy person out there lol