One of the cofounders of partizle.com, a Lemmy instance primarily for nerds and techies.

Into Python, travel, computers, craft beer, whatever

  • 1 Post
  • 21 Comments
Joined 2 years ago
Cake day: June 7th, 2023

  • Well, that’s always been the case with Skid Row, though it might be debatable which came first – the homeless encampments or the aid agencies. And for that matter, there were Hoovervilles in the Great Depression. In any city in America, there are transients milling around the shelters, which is why there’s so much NIMBYism over developing new shelters.

    But what’s going on in California probably has more to do with the fact that LA and San Francisco tend to be very tolerant of the homeless encampments and provide generous aid, thus inducing demand. The homeless population is soaring across America for various reasons, but California is a desirable place to be homeless: better aid, better climate, softer police, etc.

    Maybe California’s big cities really are more humane and generous, but at this point it’s to the detriment of livability in those places.




  • bouncing@partizle.com to Asklemmy@lemmy.ml · Why do people dislike California? · edited 2 years ago

    It sort of depends on where you are, but in San Francisco and Los Angeles, the homeless problem is noticeably worse than almost anywhere else in America. It’s bad.

    An ex of mine lives in a pretty posh part of LA (Crestview). She works constantly and really hard to afford to live there. Now there are people literally shooting heroin on the street outside her home, and to take her toddler to the park, she basically has to walk around the bodies of people who are high or sleeping.

    I mean, I’m as anti-drug war as they come, but that’s no way to live and the police really should clear it out. Even in the poorer parts of most other cities, that’s not something you see.




    > The comparison is between today and ‘today but without the highway’, not between today and before the highway was built. If the population increase is greater with the highway there, that’s still part of the induced demand.

    I wouldn’t suggest that highways never induce demand, but the idea that people are driving more in Boston because of the Big Dig seems doubtful to me.

    > A city being “bad for drivers” is not a great indicator of it not being car dependent. Cities in the Netherlands are probably the most walkable and bikeable on the planet, and also great to drive in because there are hardly any cars.

    The Netherlands has pretty robust car infrastructure too.

    And I agree; a city can be bikeable, walkable, and drivable all at once. That should be the goal.


    > Do you think the total car traffic in the Boston area today is greater than it would have been had the Big Dig not been built? If yes, the ‘infrastructure naysayers’ were correct.

    It’s probably gone down, actually, at least in per capita terms. Boston’s population is a lot bigger than it used to be, so that has to be taken into account.

    Keep in mind, the Big Dig actually reduced the total number of highway ramps, which is part of why it improved traffic flow. And by reclaiming neighborhoods from elevated highways, it reconnected areas: you can easily walk to places that weren’t reachable before.

    > But they still deepen the overall car dependency. Investing in rail-bound transportation while imposing heavy fees on car traffic into the city would likely be a better use of resources.

    Boston is far from car dependent; it’s probably one of the worst cities in America for drivers, and one of the best for cyclists and pedestrians.



  • bouncing@partizle.com to Mildly Interesting@lemmy.world · "Progress" · 2 years ago

    That’s surprising to me. I remember at the time, NBC Nightly News and PBS Newshour (my family’s news diet in the 90s) did stories about it, and they both definitely mentioned reclaiming city space as one of the benefits.

    I think the Big Dig, while it ended up costing several times what it was supposed to, will go down in history as one of the best highway projects of its era. It also proved the infrastructure naysayers wrong. A lot of people insist that any highway project just induces demand, resulting in even more congestion, but the Big Dig did nothing of the sort. To this day, 30 years on, Boston traffic is still not as bad as it was pre-Big Dig.


  • > If I created a web app that took samples from songs created by Metallica, Britney Spears, Backstreet Boys, Snoop Dogg, Slayer, Eminem, Mozart, Beethoven, and hundreds of other different musicians, and allowed users to mix all these samples together into new songs, without getting a license to use these samples, the RIAA would sue the pants off of me faster than you could say “unlicensed reproduction.”

    The RIAA is indeed a litigious organization, and they tend to use their phalanx of lawyers to sue anyone who does anything creative or new into submission.

    But sampling is generally considered fair use.

    And if the algorithm you used actually listened to tens of thousands of hours of music, and fed existing patterns into a system that creates new patterns, well, you’d be doing the same thing anyone who goes from listening to music to writing music does. The first song ever written by humans was probably plagiarized from a bird.
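
    Here’s a toy sketch of the “feed existing patterns into a system that creates new patterns” idea, with made-up melodies; it’s nothing like a real model, just the shape of the argument:

    ```python
    import random
    from collections import defaultdict

    # Learn which note tends to follow which across existing melodies
    # (all invented here), then generate a "new" melody from those patterns.
    melodies = [
        ["C", "E", "G", "E", "C"],
        ["C", "E", "G", "A", "G"],
        ["E", "G", "A", "G", "E"],
    ]

    transitions = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            transitions[a].append(b)

    # Sample a fresh sequence from the learned transitions.
    note = "C"
    new_melody = [note]
    for _ in range(7):
        note = random.choice(transitions.get(note, ["C"]))
        new_melody.append(note)

    print(" ".join(new_melody))  # e.g. C E G A G E C E
    ```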


  • You’re getting lost in the weeds here and completely misunderstanding both copyright law and the technology used here.

    First of all, copyright law does not care about the algorithms used or how well they map onto what a human mind does. That’s irrelevant. There’s nothing in particular about copyright that applies only to humans but not to machines. Either a work is transformative or it isn’t. Either it’s derivative or it isn’t.

    What AI is doing is incorporating individual works into a much, much larger corpus of writing style and idioms. If an LLM sees an idiom used a handful of times, it might start using it where the context fits. If a human sees an idiom used a handful of times, they might do the same. That’s true regardless of the algorithm, and there’s certainly nothing in copyright or common sense that separates one from the other. If I read enough Hunter S. Thompson, I might start writing like him. If you feed an LLM enough of the same, it might too.
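
    To make the idiom point concrete, here’s a toy bigram counter over an invented corpus; a real LLM is vastly more elaborate, but the principle of picking a pattern up from a handful of exposures is the same:

    ```python
    from collections import Counter, defaultdict

    # Invented corpus in which the idiom "in the weeds" appears a few times.
    corpus = (
        "getting lost in the weeds again "
        "deep in the weeds on this one "
        "out in the weeds with the details"
    ).split()

    # Count which word follows which (a bigram model).
    following = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        following[a][b] += 1

    # After a handful of exposures, "weeds" dominates after "the".
    print(following["the"].most_common())  # [('weeds', 3), ('details', 1)]
    ```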

    Where copyright comes into play is in whether the new work produced is derivative or transformative. If an entity writes and publishes a sequel to The Road, Cormac McCarthy’s estate is owed some money. If an entity writes and publishes something vaguely (or even directly) inspired by McCarthy’s writing, no money is owed. How that work came to be (algorithms or human flesh) is completely immaterial.

    So it’s really, really hard to make the case that there’s any direct copyright infringement here. Absorbing material and incorporating it into future works is what the act of reading is.

    The problem is that as a consumer, if I buy a book for $12, I’m fairly limited in how much use I can get out of it. I can only buy and read so many books in my lifetime, and I can only produce so much content. The same is not true for an LLM, so there is a case that Congress should charge them differently for using copyrighted works, but the idea that OpenAI should have to go to each author and negotiate each book would really just shut the whole project down. (And no, it wouldn’t be directly negotiated with publishers, as authors often retain the rights to deny or approve licensure).


  • >> Isn’t learning the basic act of reading text?

    > not even close. that’s not how AI training models work, either.

    Of course it is. It’s not a 1:1 comparison, but the way generative AI works and the way we incorporate styles and patterns are more similar than not. Besides, if a tensorflow script more closely emulated a human’s learning process, would that matter to you? I doubt it very much.

    >> Thousands of authors demand payment from AI companies for use of copyrighted works::Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools

    Having to individually license each unit of work for an LLM would be as ridiculous as trying to run a university where each student has to individually license each textbook they read. It would never work.

    What we’re broadly talking about is generative work. That is, by absorbing a body of work, the model incorporates it into an overall corpus of learned patterns. That’s not materially different from how anyone learns to write. Even my use of the word “materially” in the last sentence is, surely, based on seeing it used in similar patterns of text.

    The difference is that a human’s ability to absorb information is finite, bounded by the constraints of our experience. If I read 100 science fiction books, I can probably write a new science fiction book in a similar style, but I can only do that a handful of times in a lifetime. An LLM can do it almost infinitely, and that ability can then be reused by any number of other consumers.

    There’s a case here that the remuneration process we have for original work doesn’t fit well with AI training models, and maybe Congress should remedy that, but on its face I don’t think it’s feasible to just shut it all down. Something like a compulsory license model, with the understanding that AI training is automatically fair use, seems more reasonable.