• 0 Posts
  • 17 Comments
Joined 2 years ago
Cake day: July 11th, 2023

  • Half of the ways people were getting around guardrails in the early ChatGPT models were berating the AI into doing what they wanted

    I thought the process of getting around guardrails was an increasingly complicated series of ways of getting it to pretend to be someone else who doesn’t have guardrails, and then having it answer as that character.





  • Schadrach@lemmy.sdf.org to linuxmemes@lemmy.world · Linux as the true Trojan! · 2 months ago

    Really it’s actually capitalism that supposes people are too dumb to make their own choices or know how a business is run, and thus shouldn’t have say over company choices.

    Really it’s actually that businesses with that structure tend to perform better in a market economy. No one forces businesses to be started as “dictatorships run by bosses that effectively have unilateral control over all choices of the company” other than the people starting that business themselves. You can literally start a business organized as a co-op (which by your definitions is fundamentally a socialist or communist entity) - there’s nothing preventing that from being the organizing structure. The complaint instead tends to be that no one is forcing existing successful businesses to change their structure, and that a new co-op has to compete in a market where non-co-op businesses also operate.

    If co-ops were a generally more effective model, you’d expect them to be more numerous and more influential. And they do alright for themselves in some spaces. For example in the US many of the biggest co-ops are agricultural.


  • SSNs are reused. Someone dies and their number gets reassigned.

    Not even that. If you were born before 2014 or so and you’re from somewhere relatively populous, there’s a pretty good chance more than one living person has your SSN right now. SSNs were never meant to be unique; the pairing of SSN and name was meant to be unique, but no one really checked for that for most of the history of the program, so it really wasn’t either. The combination of SSN, name, and age/birthdate should actually be unique, though, because of how they were assigned even back in the day.
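    To make the uniqueness point concrete, here’s a small sketch with made-up records (the names, numbers, and field layout are all invented for illustration): keying on SSN alone collides, while the SSN + name + birthdate composite distinguishes people.

```python
# Illustrative only: fabricated records showing why SSN alone is a poor
# unique key, while the (ssn, name, birthdate) composite should hold up.
records = [
    {"ssn": "078-05-1120", "name": "Alice Smith", "birthdate": "1961-03-04"},
    {"ssn": "078-05-1120", "name": "Bob Jones",   "birthdate": "1958-11-21"},  # same SSN, different person
]

ssn_only = {r["ssn"] for r in records}
composite = {(r["ssn"], r["name"], r["birthdate"]) for r in records}

print(len(ssn_only))    # 1 -- both people collide on SSN alone
print(len(composite))   # 2 -- the composite key tells them apart
```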



  • I once tried to install Linux around then, not long after ISA cards with Plug n Play became a thing.

    Linux: So now, to even pretend to get the card to work, you have to download and run a tool that generates a config file to feed to another tool, so you can then install the driver and get basic functionality from the card (which is all that’s available on Linux). Except the first tool doesn’t generate a working config file - it generates a file containing every configuration your hardware could hypothetically support, and requires you to find and uncomment the one you actually want. So you’re manually configuring the card anyway, which kinda defeats the point of Plug n Play (though I guess that configuration was in software, not by setting jumpers).

    Same card in Windows at the time: Install card, boot Windows. Card is automatically identified and given a valid configuration, built in drivers provide basic functionality. Can download software from manufacturer for more advanced functionality.

    That soured me on Linux for a long time. Might try it again sometime soon just to see what it’s like if nothing else. ProtonDB doesn’t have the most positive things to say about my Steam collection, and I imagine odds are worse for stuff not available on Steam.


  • That analogy was chosen for a reason. Ada was originally developed between 1977 and 1983 by a DoD committee and a French programming team as a programming language for defense projects, and it was still in use at least into the early 2000s. It’s based on Pascal.

    It was intended for applications where reliability was the highest priority (above things like performance or ease of use), and one consequence of that is that there are no warnings - only compiler errors. A lot of common bad practices that are allowed to fly, or at worst generate a warning, in other languages are themselves compiler errors in Ada. Do it right or don’t bother trying. There’s no implicit typecasting: even something like 1 + 0.5, where the intent is obvious, is a compiler error, because you’re trying to add an integer to a real without explicitly converting either - you’re in extremely strongly-typed country here.

    Libraries are split across two files: one is essentially the interface for the library and the other is its implementation (not that weird, and not that different from C/C++ header files, though the code looks closer to Pascal interface and implementation sections put in separate files). The intent at the time was that different teams or different subcontractors might be building each module; by establishing a fixed interface up front, and spelling out in great detail in documentation what each piece of that interface is supposed to do, the actual implementations could be done separately and hypothetically produce a predictable result.




  • As the size of the pyramid increases, the obvious algorithm (walking all the routes down the tree) falls afoul of the time limit pretty quickly, as do several alternative algorithms you might try. There are 2^(n-1) paths for an n-level pyramid, so a pyramid 100 or 1000 levels deep very rapidly blows past the time limit unless you choose the right algorithm. I’d suggested a…much bigger dataset as one of the judgement datasets, one that took my reference implementation about 15 seconds.
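    The blow-up is easy to see in a sketch of the obvious route-walking approach (my own illustration in Python, not the actual contest code):

```python
# Naive route-walker: explores both children at every level, so an n-level
# pyramid costs on the order of 2^(n-1) path explorations.
def brute_force(pyramid, i=0, j=0):
    if i == len(pyramid) - 1:          # bottom row: the path ends here
        return pyramid[i][j]
    # Recurse into both diagonal children and keep the better path.
    return pyramid[i][j] + max(brute_force(pyramid, i + 1, j),
                               brute_force(pyramid, i + 1, j + 1))

print(brute_force([[3], [7, 4], [2, 4, 6], [8, 5, 9, 3]]))  # → 23
```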

    This was a contest for high school kids c. 2001 and was going to involve 4 problems across 6 hours. The prof making the decision thought it was a bit much to ask them to figure out why the algorithm they were likely to try wasn’t working in time (noting that the only feedback they were going to get was along the lines of “failed for time on judgement dataset 3 with 10000 layers”), realize it was a poor choice of algorithm rather than some issue in their implementation, and then devise, implement, and debug a faster algorithm, all ideally within 1.5 hours.

    For example, the algorithm I used for my reference solution started one layer above the bottom of the pyramid, compared the two children the current number could be summed with, replaced the current number with the larger sum, and continued in that fashion up the pyramid layer by layer. So: compare, add, store for each number above the bottom layer. When you process the number at the top of the pyramid, that’s the final result. It’s simple and it’s fast. But it requires looking at the problem upside down, which is admittedly a useful skill.
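    That bottom-up pass can be sketched like so (my reconstruction of the approach described above, not the original reference implementation):

```python
# Bottom-up pass: each number absorbs the larger of its two children,
# so one compare/add/store per number and the apex ends up with the answer.
def max_path_sum(pyramid):
    rows = [row[:] for row in pyramid]       # work on a copy of the input
    for i in range(len(rows) - 2, -1, -1):   # start one layer above the bottom
        for j in range(len(rows[i])):
            rows[i][j] += max(rows[i + 1][j], rows[i + 1][j + 1])
    return rows[0][0]                        # the apex now holds the best total

print(max_path_sum([[3], [7, 4], [2, 4, 6], [8, 5, 9, 3]]))  # → 23
```

    One compare, add, and store per number means linear work in the size of the pyramid, versus the 2^(n-1) paths of the naive walk.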


  • See, when I was a comp sci undergrad 20-odd years ago, our department wanted to run a programming competition for the local high schools. We set some ground rules that were similar to ACM programming competition rules, but a bit more lax - the big ones were that entries had to run on the command line, had to take the problem dataset filename as the first parameter, and had to solve all datasets attempted by the judges in less than 2 minutes per dataset, noting that the judgement datasets would be larger than the example ones.

    Some of the students were asked to come up with problem ideas. I was told mine was unfair, but mine was entirely about choosing the right algorithm for the job.

    It went like this: the file would contain a pyramid of numbers. You were supposed to think of each number as connecting to the two numbers diagonally below it, and all paths could only proceed down. The goal was to calculate the largest sum along any possible path down.
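    A minimal loader for that setup might look like this (my own sketch; the exact file layout beyond “a pyramid of numbers” is assumed, here one whitespace-separated row per line):

```python
# Hypothetical dataset loader for the contest format described above:
# line i of the file holds the i-th row of the pyramid, i + 1 integers.
def load_pyramid(path):
    with open(path) as f:
        return [[int(x) for x in line.split()] for line in f if line.strip()]

# Per the ground rules, the dataset filename arrives as the first
# command-line parameter, e.g.: pyramid = load_pyramid(sys.argv[1])
```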