  • The modern definition we use today was cemented in 1998, alongside the founding of the Open Source Initiative. The term was used before then, but had no single, well-defined meaning. What we would call Open Source today was mostly known as “free software” prior to 1998, amongst many other terms (sourceware, freely distributable software, etc.).

    Listen again to your 1985 example. You’re not hearing exactly what you think you’re hearing. Note that the phrase used in your video is not “Open-Source code” as we would use it today, with all its modern connotations (that’s your modern ears projecting modern meaning back onto the past), but simply “open source-code” - as in “source code that is open”.

    In 1985 that didn’t necessarily imply anything specific about copyright, licensing, or philosophy. Today the term carries a more concrete definition and a lot of cultural baggage, which it isn’t necessarily appropriate to apply to past statements.



  • In this thread: people who don’t understand what power is.

    Power isn’t something that is “pushed” into a device by a charger. Power is the rate at which a device uses energy. Power is “consumed” by the device, and the wattage rating on the charger is simply how much it can supply, which is determined by how much current it can deliver at its output voltage. A device only draws the power it needs to operate, and this may go up or down depending on what it’s doing, e.g. whether your screen is on or off.

    As long as the voltage is correct, you could hook your phone up to a 1000W power supply and it would be absolutely fine. This is why everything’s OK when you plug devices into your gaming PC with its 1000W power supply, or why you can swap out a power-hungry video card for a low-power one without the power supply frying your PC. All that extra capability simply goes unused if it isn’t called for.

    The “pushing force” that is scaled up or down is voltage. USB chargers advertise their capabilities, or a power delivery protocol is used to negotiate voltages, so the device can choose to draw more current (and thus power) from the charger as it sees fit. (If the device tries to draw too much, a poorly-designed charger may fail, which in turn could pass inappropriate voltages and currents on to the device, damaging both. Well-designed chargers have protections to prevent this, even in the event of failure. Cheap crappy chargers often don’t.)
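    To put rough numbers on that, here’s a minimal Python sketch of the relationship described above. The voltage/current profiles are typical USB Power Delivery values, and the 18W device draw is just an assumed figure for illustration:

    ```python
    # Power (W) = voltage (V) x current (A). A charger's wattage rating is a
    # ceiling on what it can supply, not an amount it pushes into the device.

    def max_power(voltage_v: float, max_current_a: float) -> float:
        """Maximum power a supply can deliver at a given voltage."""
        return voltage_v * max_current_a

    # Typical USB PD profiles a charger might advertise: (voltage, max current).
    charger_profiles = [(5.0, 3.0), (9.0, 3.0), (15.0, 3.0), (20.0, 5.0)]

    device_needs_w = 18.0  # assumed: what the phone wants to draw right now

    for volts, amps in charger_profiles:
        ceiling = max_power(volts, amps)
        # The device draws min(need, ceiling); any spare capability goes unused.
        drawn = min(device_needs_w, ceiling)
        print(f"{volts:>4.0f}V profile: up to {ceiling:5.1f}W available, "
              f"device draws {drawn:.1f}W")
    ```

    Running it shows the device pulling the same 18W from every profile that can supply it: profiles below 18W cap the draw, and anything above it just leaves the headroom unused, which is why a 1000W supply doesn’t harm a low-power device.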


  • I understand the concerns about Google owning the OS; that’s my only worry with my chromebook. If Google starts preventing the use of adblockers, or limiting freedoms in other ways, that might sour my opinion. But the hardware can run other OSs natively, so that would be my get-out-of-jail option if needed.

    I’ve not encountered problems with broken support for dev tools, but I am using a completely different toolchain from yours. My experience with Linux dev and cross-compiling for Android has been pretty seamless so far. My chromebook also seems to support GPU acceleration through both the Android and Linux VMs, so perhaps that’s a device-specific issue?

    I’m certainly not going to claim that chromebooks are perfect devices for everyone, nor a replacement for a fully-fledged laptop or desktop OS experience. For my particular usage it’s worked out great, but YMMV. My main point is that ChromeOS isn’t just for idiots, as the poster above seemed to think.

    Also, a good percentage of my satisfaction comes from the hardware and form factor rather than ChromeOS per se. The same device running Linux natively would still tick most of my boxes, although I’d probably miss a couple of Android apps and tablet-mode support.


  • > People who use Chromebooks are also really slow and aren’t technically savvy at all.

    Nonsense. I think your opinion is clouded by your limited experience with them.

    ChromeOS supports a full Debian Linux virtual machine/container environment. That’s not a feature aimed at non-tech-savvy users; it’s used by software developers (especially web and Android devs), Linux sysadmins, and students at all levels.

    In fact I might even argue the opposite: a more technically-savvy user is more likely to find a use case for them.

    Personally, I’m currently using mine for R&D in memory management and cross-platform compiler technology, with a bit of hobby game development on the side. I’ve even installed and helped debug Lemmy on my chromebook! It’s a fab, ultra-portable, bulletproof dev machine with a battery life that no full laptop can match.

    But then I do apparently have an IQ of zero, so maybe you’re right after all…


  • > Have you been to an IMAX? Every seat can see the whole screen.

    I don’t understand this comment… Why do you think I’m implying that people can’t see the whole screen from every seat? I don’t see how it’s related to what I said.

    I was simply jokingly observing the irony of sitting at the back to reduce the size of the screen in your field of vision because you have difficulty taking in a wide field of view – a problem that is only made worse by going to a larger screen in the first place.
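    For anyone curious about the geometry behind the joke, here’s a rough Python sketch; the screen widths and distances are made-up illustrative numbers, not real IMAX dimensions:

    ```python
    import math

    def viewing_angle_deg(screen_width_m: float, distance_m: float) -> float:
        """Horizontal angle the screen subtends in your field of vision."""
        return math.degrees(2 * math.atan(screen_width_m / (2 * distance_m)))

    # Made-up numbers: a standard screen vs. a larger IMAX-style one.
    for label, width_m, distance_m in [
        ("standard screen, mid-row", 15.0, 20.0),
        ("larger screen, mid-row", 25.0, 20.0),
        ("larger screen, back row", 25.0, 35.0),
    ]:
        print(f"{label:<26}: {viewing_angle_deg(width_m, distance_m):.0f} degrees")
    ```

    With these assumed numbers, moving to the back row of the larger screen roughly restores the viewing angle of the smaller screen, which is the irony being pointed out: you pay for a bigger picture and then sit where it looks about the same size.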