DefinitelyNotAPhone [he/him]

  • 0 Posts
  • 26 Comments
Joined 4 years ago
Cake day: July 29th, 2020

  • Consciousness is not part of the observer effect (which is itself named in the most infuriating way possible, precisely because it makes people think the universe is somehow aware when something sentient is looking at it). “Observing” a particle requires interacting with it in a way that meaningfully affects its state, whether that’s deflecting it from its original path or changing its speed, and therefore it is impossible at the quantum level to be a passive observer who does not influence the outcome.

    In the case of the double slit experiment, unobserved light acts as a wave and produces an interference pattern, while observed light acts like a particle. The reason for this is both complicated and simple: light behaves as a wave because of probability. There’s no way to observe a photon without influencing it, so the best we can do is say it has a certain probability of being somewhere in a collection of positions, and for photons that collection takes the shape of a wave (a photon can travel in any number of directions outward from the emitter in the experiment, but always away from the emitter and towards the wall the slits are cut into). For the purposes of this probability wave, the start position is the emitter and the end position is the wall behind the slits, so averaging out a large number of photons recreates the interference pattern on the wall.

    However, if you observe the photons at the slits to figure out which slit they’re going through, you have influenced the photons and thus collapsed that probability wave into a particle, and in the process created a new probability wave from that moment onwards, one with the same end position as the original wave but now starting at the individual slit. From its perspective there is no second slit, so the wave acts as if it were in the single-slit setup, because from its perspective it is, hence the loss of interference.
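    To make the “add amplitudes vs. add probabilities” distinction concrete, here’s a minimal numpy sketch. It’s my own illustration, not from the experiment literature, and the wavelength, slit spacing, and screen distance are made-up values:

    ```python
    import numpy as np

    wavelength = 500e-9            # 500 nm light (assumed value)
    k = 2 * np.pi / wavelength     # wavenumber
    slit_gap = 50e-6               # distance between the two slits (assumed)
    screen_dist = 1.0              # slits-to-wall distance in meters (assumed)
    x = np.linspace(-0.02, 0.02, 1000)  # positions along the wall

    # Path length from each slit to each point on the wall
    r1 = np.hypot(screen_dist, x - slit_gap / 2)
    r2 = np.hypot(screen_dist, x + slit_gap / 2)

    # Unobserved: the amplitude passes through both slits, so you add the
    # complex amplitudes first, then square the total -> interference fringes.
    unobserved = np.abs(np.exp(1j * k * r1) / r1 + np.exp(1j * k * r2) / r2) ** 2

    # Observed at the slits: the wave "restarts" at one slit, so you square
    # each slit's amplitude separately and add the probabilities -> a smooth
    # blob with no fringes.
    observed = (np.abs(np.exp(1j * k * r1) / r1) ** 2
                + np.abs(np.exp(1j * k * r2) / r2) ** 2)
    ```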

    Nothing here has anything to do with consciousness. You can run this experiment with no one in the room and it will behave exactly the same, and it has a sound (if conceptually confusing) mathematical cause.

    On a side note, string theory is effectively unfalsifiable and therefore completely useless as a scientific theory.





  • Time doesn’t slow down when you approach the speed of light

    Correct, but only from your perspective. To other people you’ve slowed down, but from where you’re sitting (or careening through the cosmos at the universal speed limit) everything happens just as fast as it normally does.
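    To put a number on how much you’ve “slowed down” from everyone else’s perspective, here’s a quick sketch of the standard Lorentz factor (my addition, not the commenter’s; the 99%-of-c figure is just an example):

    ```python
    import math

    c = 299_792_458.0  # speed of light in m/s

    def lorentz_gamma(v: float) -> float:
        """Factor by which an outside observer sees your clock run slow at speed v (m/s)."""
        return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

    print(lorentz_gamma(0.99 * c))  # ~7.1: one of your seconds looks like ~7 of theirs
    ```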

    the theory we’re using to describe much of the universe is based on a bad premise, that the speed of light is constant.

    Quasi-correct. “The speed of light” as we think of it in physics is actually the speed of information, which dictates how quickly changes can propagate outwards (or put another way: how quickly you can know about something happening elsewhere). We refer to it as the speed of light because photons move at that speed in a vacuum due to having no mass and thus moving at the fastest possible speed, but things like gravitational waves also propagate at that same speed and have nothing to do with EM radiation. However, the speed of information doesn’t change; it’s a hard natural law with no known exceptions.
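    A concrete illustration of that hard limit (my example, using standard values): if the Sun somehow vanished, neither its light nor its gravity would stop reaching Earth instantly, because no signal of any kind arrives sooner than light would.

    ```python
    AU = 149_597_870_700  # mean Earth-Sun distance in meters
    c = 299_792_458       # speed of information (and of light) in m/s

    print(AU / c / 60)    # ~8.3 minutes before Earth could possibly "know"
    ```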

    Physics in general is cheating for this thread though, because the answer to what makes stuff happen, as we understand it, is a giant metaphorical mass of “I ’unno.” The Standard Model, relativity, quantum mechanics, string theory, etc. all have giant gaping holes that other models can often fill, yet none of our attempts to properly combine them have worked so far. They’re still correct enough to base your entire life around without any worries, but there’s always that last 0.01% that amounts to the margins of old maps reading “Here There Be Dragons”.






  • You have to sell a new phone every 12-18 months, because otherwise the shareholders eat you alive for not chasing infinite profits. You have to differentiate your new phone from your last phone, even if there are no meaningful changes to be made and the last phone was good enough for everything anyone would ever use it for (as was the one before it, and the one before that, and so on). You have to push people to buy the new phone, because otherwise you don’t make money.

    So you tell the engineers to bump up the clock speeds on the processor 5-10% so you can market it as being faster. You market the phone as being revolutionary for using the USB connector that was forced on you by regulators because your proprietary one was filling landfills with e-waste and pretend like it was your brilliant idea all along. You make sure to limit that USB connector to speeds that were outdated 10 years ago purely so you have a built-in ‘upgrade’ for your next phone where you fix the thing that shouldn’t have been a problem to begin with.

    And then you realize your phone overheats because you overclocked the processor, all to squeeze extra performance out of a chip that 99.9999999999% of users will never notice or need. You’ve made the user experience of your phone worse purely so you could pursue an untenable goal of endless profit, a pattern you will repeat every 12-18 months for the rest of eternity or until the climate wars claim your life.

    Only the most sane and functional economic system.




  • Well, I’d rather the day be sooner than later.

    Agreed, but we’re not the ones making the decision. And the people who are have two options: move forward with a risky, expensive, and potentially career-ending migration with no benefit other than the system being a little more maintainable, or continue with business-as-usual and earn massive sums of money they can use to buy a bigger yacht next year. It’s a pretty obvious decision, and the consequences will probably fall on whoever takes over after they move on or retire, so who cares about the long-term consequences?

    You run months and months of simulated transactions on the new code until you get an adequate amount of bugs fixed.

    The stakes in financial services are so much higher than in typical software. If some API has 0.01% downtime or errors, nobody gives a shit. If your bank drops 1 out of every 1000 transactions, people lose their life savings. Even the most stringent testing and staging environments don’t guarantee the level of accuracy required without truly monstrous sums of money being thrown at the problem, which leads us back to my point above about risk vs yachts.
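    A back-of-the-envelope sketch of why that failure rate is catastrophic at bank scale (the daily volume is a made-up, illustrative number):

    ```python
    daily_transactions = 10_000_000  # hypothetical mid-size bank's daily volume
    error_rate = 1 / 1000            # "drops 1 out of every 1000 transactions"

    print(int(daily_transactions * error_rate))        # 10,000 broken transactions per day
    print(int(daily_transactions * error_rate * 365))  # ~3.65 million per year
    ```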

    There will come a time when these old COBOL machines will just straight die, and they can’t be assed to keep making new hardware for them.

    Contrary to popular belief, most mainframes are pretty new machines. IBM is basically afloat purely because giant banks and government institutions would rather just shell out a few hundred thousand every few years for a new, better Z-frame than go through the nightmare that is a migration.

    If you’re starting to think “wow, this system is doomed to collapse under its own weight and the people in charge are actively incentivized to not do anything about it,” then you’re paying attention and should probably start extending that thought process to everything else around you on a daily basis.