Hacker News

I understand everything about this comment except the words "but" and "it's not very helpful if".

For most people, if your expertise isn't guided by boards of already experienced people then it's probably not really worth it. Most people are shockingly bad at learning on their own. They're gonna go home and watch TV, go to the gym, and spend time with their loved ones.

Look, I would not want a doctor to perform my surgery who did not do a residency. I don't care if they carved up 1,000 cadavers in their free time. I want somebody where the board of their specialty has said "yup, this guy's good". I'm not gonna spend the time to try to vet the doctor myself, because that's really, really hard. I'm not a doctor, I don't know shit. I have to rely on institutions of trust to do that work for me.

And that's really what universities are at their core - institutions of trust. When you get a degree, there's trust you understand the material to an appropriate degree. When you pass a residency, there's trust you understand the material to an appropriate degree. If we lose that trust, such as by letting students cheat with AI, that is a big problem.

Could I hire someone who says they're an expert, with no degree, and just give them a leetcode problem? Sure. But if I hire someone with a degree, I have a much greater level of certainty they can actually code. Same goes for work experience.


If Reddit created a material number of fake accounts and reported those as a key metric for fundraising, that would be fraud.

I think the story has been exaggerated a lot, though. The original story was that the admins were doing real submission activity (links, etc.) but they had a mechanism to create a new user account with the submission. So they created a lot of new user accounts for themselves, but the activity was real and driven by the founders.

We all have test accounts on our production systems. If it's a tiny number of the overall users at time of fundraising it doesn't matter. On the other hand if they created 10,000 accounts and then claimed they had 11,000 users that would be blatant fraud. I really don't think they did anything like that, though. I think they seeded the very initial site with content and made different "accounts" for it, but by the time they raised they had real traffic.


It’s probably easier to buy that data directly from Apple.

Google’s core business is built on tracking data, so they would be reluctant to sell, necessitating covert collection.


> https://news.ycombinator.com/item?id=30397201

This one had some interesting comments

> off topic, but they have a very suspect pricing page: https://www.builder.ai/studio-store

> "Delivery: 12 weeks"

> is Builder.ai just a CRUD app for indian sweatshops to build the apps?

> > It would not have spawned an entire industry and no code websites every other week or so if it was ‘just a CRUD app’.


A high school math concept that became even more relevant in the era of Artificial Intelligence.

I mean, I appreciate it being laid out, so I don't have to worry about people saying "absolutely nobody is making that preposterous argument, nobody wants us to be like the east coast dockworkers, it's just...".

This is a key point people forget. Unions did not invent the 5-day workweek or healthcare; they stole the idea from private non-union jobs. If you think unions are going to provide some unparalleled advancement in ease of living, you're going to be disappointed.

I'm not sure I understand the appeal of sites that are literally a less-than-a-sentence prompt I could have typed myself.

> Warp, while excellent, requires individual approval for each command—there’s no equivalent to Claude’s “dangerous mode” where you can grant blanket execution trust.

That’s a lie. I simply added “.*” to the whitelist. It’s a regex.
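As a sketch of why a ".*" entry grants blanket approval (the allowlist format here is illustrative, not Warp's actual config), matched as a regex it approves every command:

```python
import re

# Hypothetical command allowlist: a command auto-runs if any pattern
# matches it in full. Adding ".*" matches any command string at all.
allowlist = [r"git status", r"ls .*", r".*"]

def is_approved(command: str) -> bool:
    """Return True if any allowlist pattern matches the whole command."""
    return any(re.fullmatch(pattern, command) for pattern in allowlist)

print(is_approved("rm -rf /tmp/scratch"))  # True: ".*" matches everything
```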


I think that's a tragedy. x64 is kind of awful, and arm64 isn't much better.

Please don't conflate fiction with reality.

Me too. I'll take the higher LoC for the greater certainty of what is going on.

I thought it was clever in C# years ago when I first used it to grok all the try/catch/finally flows, including using and nested versions, and what happens if an error happens in the catch, and what if it happens in the finally, and so on. But now I'd rather just not think about that stuff.


Life is complete, I now have a beautiful 2D ascii elephant!

Random idea: now animate it.

Run the initial generate, keep that result, provide it back to another llm call: “user requested an elephant. you drew this. (object here) generate the next frame of an animation of this.”

Iterate the prompt 2-3 times for cool animated ascii art :)
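A minimal sketch of that loop in Python; call_llm is a hypothetical placeholder (stubbed here so the loop runs), not any real API:

```python
def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call an actual LLM API here.
    return "<stub next frame>"

def animate(first_frame: str, steps: int = 3) -> list[str]:
    """Feed each frame back to the model to request the next one."""
    frames = [first_frame]
    for _ in range(steps):
        prompt = (
            "User requested an elephant. You drew this:\n"
            f"{frames[-1]}\n"
            "Generate the next frame of an animation of this."
        )
        frames.append(call_llm(prompt))
    return frames

frames = animate("( elephant )")
print(len(frames))  # 4: the original frame plus 3 generated ones
```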


> computers have traditionally operated with an extremely low tolerance for errors in the input

That's because someone has gone out of their way to mark those inputs as errors, because they make no sense. The CPU itself has no qualms doing 'A' + 10, because what it actually sees is a request to feed 01000001 (65) and 00001010 (10) into its 8-bit adder circuit, which will output 01001011 (75). That result gets displayed as 75 or 'K' or whatever, depending on the code afterwards. But generally, the operation is nonsense, so someone will mark it as an error somewhere.

So errors are a way to let you know that what you're asking is nonsense according to the rules of the software. Like removing a file you do not own, or accessing a web page that does not exist. But as you've said, we can now rely on more accurate heuristics to propose alternative solutions. The real issue is when the machine goes off and actually computes the wrong information.
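The arithmetic above, sketched in Python (ord/chr stand in for the raw byte interpretation):

```python
# The 8-bit adder doesn't care what the bits "mean": 'A' is just 65.
a = ord("A")          # 0b01000001 == 65
b = 0b00001010        # 10
total = a + b         # 0b01001011 == 75
print(total, chr(total))  # 75 K -- whether that reads as "75", "K", or an
                          # error depends on the code that interprets it
```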


Probably AI and tardigrades will be the only things existing.

Not quite sure it's working properly. Just saw a Shrek face labeled "pikachu", and a column with nothing else labeled "windmill".

While I agree with you, let’s not let the man babies in charge of the US get away unscathed. They are just as foolishly childish.

This is such a great move. Faith restored in the language after the generics debacle.

Makesunsets.com

Can you share the report?

I came across this on Instagram today:

https://www.instagram.com/reel/DKcV8_cPHll/

They made a one handed keyboard for someone who can't use their right hand. They also open sourced it on github:

https://github.com/htx-studio/One-Handed-Keyboard


> Safari-on-Windows level of shenanigans of reimplementing AppKit on other platforms

I was curious about this, so I downloaded it to take a look. It doesn't look like they actually shipped AppKit, at least as a separate DLL, but they did ship DLLs for Foundation, Core Graphics, and a few other core macOS frameworks.


They do different things, I hear? I know WireGuard works closer to the kernel, but it's more of a traditional "VPN" otherwise, and you'd have to add "mesh."

Darwin has its own set of futex primitives that it only fairly recently made public API, see https://developer.apple.com/documentation/os/os_sync_wait_on.... But there is a problem with this approach on Darwin, which is that the Darwin kernel has a Quality of Service thread priority implementation that differs from other kernels such that mutexes implemented with spinlocks or with primitives like this are vulnerable to priority inversion. Priority inversion is of course possible on other platforms, but other kernels typically guarantee even low-priority threads always eventually get serviced, whereas on Darwin a low-QoS thread will only get serviced if there are no higher-QoS threads that want to run.

For this reason, on Darwin if you want a mutex of the sort this article describes, you'll typically want to reach for os_unfair_lock, as that will donate the priority of the waiting threads to the thread that holds the lock, thus avoiding the priority inversion issue.


Microsoft: “if you don’t upgrade to Windows 11, you’ll no longer get any more Windows Updates”

The People: “No more Windows Updates? I see this as an absolute win!”


This is a rock we’re going to have an increasingly hard time throwing at other countries.

> "why not just run the checks at the backend's discretion?"

Because the other side may not be listening when the compute is done, and you don't want to cache the result of the computation because of privacy.

The sequence of events is:

1. Phone fires off a request to the backend.
2. Phone waits for a response from the backend.

The gap between 1 and 2 cannot be long because the phone is burning battery the entire time while it's waiting, so there are limits to how long you can reasonably expect the device to wait before it hangs up.

In a less privacy-sensitive architecture you could:

1. Phone fires off a request to the backend and gets a token for response lookup later.
2. Phone checks for a response later with the token.

But that requires the backend to hold onto the response, which for privacy-sensitive applications you don't want!
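A toy Python model of that second flow (all names here are illustrative, not any real API) makes the retention problem concrete:

```python
import uuid

# Toy model of the token-based flow: the backend must retain each
# response until the phone polls for it, which is exactly the retention
# a privacy-sensitive design wants to avoid.
pending: dict[str, str] = {}   # token -> stored response

def submit_request(payload: str) -> str:
    """Phone fires off a request; backend returns a lookup token."""
    token = str(uuid.uuid4())
    pending[token] = f"result for {payload}"   # backend holds the response
    return token

def poll(token: str):
    """Phone checks back later; response is deleted once delivered."""
    return pending.pop(token, None)

token = submit_request("photo-check")
print(poll(token))   # "result for photo-check"
print(poll(token))   # None: already delivered, no longer held
```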


I'm genuinely surprised it took this long for someone to make that. Very creative.

No, lack of sleep.

