I'm probably trying to approach this the wrong way (trying to understand the cause of this error)
I don't get where the 0.21 result is coming from 🤯
Mario doesn't know how to do the job; Paolo does.
With Mario's machinery, Paolo could do the bulk of the work and then make the fine adjustments that his experience allows.
I've been using it for two years and it has sped up many tedious phases of my work without affecting the final result; in fact, I've sometimes seen new and interesting approaches that have grown my pipe-fitter's knowledge.
It's "vibe piping" that's a 💩
@𝓜𝓪𝓾𝓻𝓸 𝓥𝓮𝓷𝓲𝓮𝓻 I join the friendica users writing "first!"
before having heard how @Diego Roversi got to the result, which is a much cleverer method than the one I had come up with
Climate change and energy: We did the math on AI’s energy footprint. Here’s the story you haven’t heard.
technologyreview.com/2025/05/2…
The emissions from individual AI text, image, and video queries seem small—until you add up what the industry isn’t tracking and consider where it’s heading next.
— MIT Technology Review
2025 Pet Hacks Contest: Automatic Treat Dispenser Makes Kitty Work For It
hackaday.com/2025/05/14/2025-p…
Treat dispensers are old hat around here, but what if kitty doesn’t need the extra calories — and actually needs to drop some pounds? [MethodicalMaker] decided to link the treat dispens…
Hackaday
Oof. On the nose here by @firstdogonthemoon
(Click through to the link to read the full cartoon.)
Say one morning you find yourself needing the list of provincial capitals and their locations.
And you remember that this information is already on openstreetmap. How do you extract it?
I open the overpass-turbo site (overpass-turbo.eu/) and see there's already an example that searches for drinking fountains. Ok, that looks simple, but what if I need to do something more complicated?
I look at the top and see "Wizard" in the menu. It can generate complicated queries from simple descriptions in English, complete with examples of more complex searches. A quick tour of the openstreetmap wiki to find the right fields for the query, a look at the attributes of a few random cities for examples, and I arrive at this:
(place=city or place=town) and (capital=2 or capital=4 or capital=6) in italy
and the generated query is:
[out:json][timeout:25];
// fetch area “italy” to search in
{{geocodeArea:italy}}->.searchArea;
// gather results
(
nwr["place"="city"]["capital"="2"](area.searchArea);
nwr["place"="city"]["capital"="4"](area.searchArea);
nwr["place"="city"]["capital"="6"](area.searchArea);
nwr["place"="town"]["capital"="2"](area.searchArea);
nwr["place"="town"]["capital"="4"](area.searchArea);
nwr["place"="town"]["capital"="6"](area.searchArea);
);
// print results
out geom;
Which basically means: cities with more than 10,000 inhabitants that are capitals of a province (6), region (4), or country (2).
At this point the whole thing can be saved to a JSON file.
Next step: use jq to extract only the information I need.
In case anyone is curious, this is how to extract the name and coordinates from the output using jq:
cat province.geojson | jq ".features[] | { name: .properties.name , coord: .geometry.coordinates} " >province.json
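For anyone who prefers Python over jq, a minimal sketch of the same extraction. The GeoJSON shape (`features`, `properties.name`, `geometry.coordinates`) matches what the jq filter above reads; the trimmed sample data here is invented for illustration:

```python
import json

# Trimmed sample of a GeoJSON export from overpass-turbo (invented values)
sample = {
    "features": [
        {
            "properties": {"name": "Torino"},
            "geometry": {"type": "Point", "coordinates": [7.68, 45.07]},
        }
    ]
}

def extract(geojson):
    # Same as the jq filter: {name, coord} for every feature
    return [
        {"name": f["properties"]["name"], "coord": f["geometry"]["coordinates"]}
        for f in geojson["features"]
    ]

print(json.dumps(extract(sample), ensure_ascii=False))
```

With a real export you would `json.load()` the `province.geojson` file instead of using the inline sample.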
Today I will be joining Matt Venn's open source silicon stream and we are going to talk about Greyhound, my latest chip with a RISC-V core and embedded FPGA, taped out on IHP SG13G2.
🎥 Link to the stream: youtube.com/watch?v=S0drZqEwSN…
Looking forward to your questions!
#OpenSource #ASIC #FPGA
https://www.linkedin.com/posts/leo-moser_asic-fpga-opensource-activity-7317451171571322880-42tJ?utm_source=share&utm_medium=member_desktop&rcm=ACoAADBBa0IBN2...
The Curse of Knowing How, or; Fixing Everything
notashelf.dev/posts/curse-of-k…
A reflection on control, burnout, and the strange weight of technical fluency.
Curl project founder snaps over deluge of time-sucking AI slop bug reports
Lead dev likens flood to 'effectively being DDoSed' Curl project founder Daniel Stenberg is fed up with the deluge of AI-generated "slop" bug reports and recently introduced a checkbox to screen low-effort submissions that are draining maintainers' time.…
#theregister #IT
go.theregister.com/feed/www.th…
Connor Jones (The Register)
Editor’s Note: previous titles for this article have been added here for posterity.
alex.party
I get a lot of emails from people wanting help with math and physics, ranging from students needing advice to completely deranged crackpots. Lately I'm getting more and more crackpots who say they developed their theories with the help of AI.
It gets worse. Check out this article by Miles Klee:
rollingstone.com/culture/cultu…
Strange things are happening when people take ChatGPT too seriously! I don't think humanity is ready for AI, even the primitive sort we have today.
Thanks to @peter for this screenshot from the article.
Marriages and families are falling apart as people are sucked into fantasy worlds of spiritual prophecy by AI tools like OpenAI's ChatGPT
Miles Klee (Rolling Stone)
As a grumpy old git -- it says so on my socks, it must be true -- I appreciate a properly grumpy website. Grumpy.website really is.
@Fabrix.xm does anyone know what snake this is?
(the photo was taken in northern Italy, at around 850 m above sea level)
@Fabrix.xm somebody elsewhere suggested en.wikipedia.org/wiki/Smooth_s…
(a juvenile one, probably, since it was only ~25 cm long)
@Fabrix.xm somebody elsewhere suggests it.wikipedia.org/wiki/Coronell…
(probably a juvenile, since it was only about 25 cm long)
The response to the predicted crash of the AI sector often is that "every crash leaves something useful behind" and that this time it will be models. I do not think that is the case.
AI models age like milk and the infrastructures left behind won't be ones that I see as helpful for democratic societies.
tante.cc/2025/04/15/these-are-…
After sharing Ed Zitron’s latest piece called “OpenAI Is A Systemic Risk To The Tech Industry” I got a few responses arguing in a similar way: People agree that “AI” and especially “generative AI” is a massive bubble that does not really make much se…
tante (Smashing Frames)
"Dear A.I., please make an image without a single elephant in it."
"Roger that, images with elephants in 'em, coming right up!"
"Well, it's nice that we're at least burning the planet for something that works really well and does useful things."
#AI
Get free access to over 60K cat images and breed information. A fully protected and authenticated API to instantly fill your website or app with cat content.
thecatapi.com
#Math is really overrated, says #Batman!
smbc-comics.com/comic/battrian…
Saturday Morning Breakfast Cereal - Battriangulation
www.smbc-comics.com
The April Fools joke that might have got me fired
(Old Vintage Computing Research, Tuesday, April 1, 2025)
oldvcr.blogspot.com/2025/04/th…
Everyone should pull one great practical joke in their lifetimes. This one was mine, and I think it's past the statute of limitations. The s...
oldvcr.blogspot.com
WHAT—
... "They point to how security researchers hated Visual Basic 6 binaries due to the complexity of reverse engineering the software, the presence of a Lua obfuscation layer in the 2012 Flame malware, and the Grip virus, which contained a Brainfuck interpreter coded in Assembly to generate its keycodes, as examples."
It can only be a matter of time until malware authors stumble across CLC-INTERCAL. And then we'll ALL be sorry!
theregister.com/2025/03/29/mal…
: Miscreants warming to Delphi, Haskell, and the like to evade detection
Thomas Claburn (The Register)
My brain has just been hacked reading up on Intercal:
en.wikipedia.org/wiki/INTERCAL
I've got James Brown's "please, please don't go" looping in my head.
...I daren't wonder how many pleases it would take for somebody to hand over their cryptowallet?
fwiw, you may find this Makefile interesting:
git.sr.ht/~indieterminacy/1q20…
This is actually what LLMs do best: they can create beautifully plausible prose to explain even the most nonsensical and asinine things as if they're obvious.
@TerryHancock the GPT has no conception of acknowledging or covering an error, or even the concept of an error. It's just stringing words together in a way that sounds good.
I got a similar result. But I could get it to go back to the wrong results by "pressing" ChatGPT that its answer was wrong.
infosec.exchange/@realn2s/1146…
Actually, I find the differing results even more worrying. A consistent error could be "fixed", but random errors are much harder or impossible to fix (especially if they are an inherent property of the system/LLMs)
Claudius Link
2025-06-05 06:51:09
@realn2s @Mastokarl @JonathanGulbrandsen @oli @cstross
I assume the guy who came up with the stochastic parrot metaphor is very embarrassed by it by now. I would be.
(Completely ignoring the deep concept building that those multi-layered networks do when learning from vast datasets, so they stochastically work on complex concepts that we may not even understand, but yes, parrot.)
@Mastokarl @realn2s @JonathanGulbrandsen @oli @cstross
But you're evidently gullible enough to have fallen for the grifter's proposition that the text strings emerging from a stochastic parrot relate to anything other than the text strings that went into it in the first place: we've successfully implemented Searle's Chinese Room, not an embodied intelligence.
en.wikipedia.org/wiki/Chinese_…
(To clarify: I think that a general artificial intelligence might be possible in principle: but this ain't it.)
thought experiment arguing that a computer cannot exhibit "understanding"
Contributors to Wikimedia projects (Wikimedia Foundation, Inc.)
Agree. I'm more and more convinced that today's chatbots are just an advanced version of ELIZA, fooling the users and just appearing intelligent
en.wikipedia.org/wiki/ELIZA
I wrote a thread about it infosec.exchange/@realn2s/1117…
where @dentangle fooled me using the ELIZA techniques
Claudius Link (@realn2s@infosec.exchange)
Infosec Exchange
I'm not sure of the "difference"
Different in pure dimension for sure (molehill vs mountain).
On a higher level:
ELIZA used keywords with a rank which, together with the relations to the output sequences, were hardcoded in the source.
LLMs use tokens with a probability which, together with the relations to the output token sequences, are determined through training data.
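The contrast can be sketched in a few lines of toy Python. Both "models" here are invented for illustration; real ELIZA scripts and real LLMs are vastly richer:

```python
import random

# ELIZA-style: keyword -> (rank, canned reply), hardcoded in the source
eliza_rules = {
    "mother": (3, "Tell me more about your family."),
    "computer": (2, "Do machines worry you?"),
}

def eliza_reply(text):
    # Answer with the reply of the highest-ranked keyword found in the input
    hits = [rule for kw, rule in eliza_rules.items() if kw in text.lower()]
    return max(hits)[1] if hits else "Please go on."

# LLM-style: next-token probabilities, which training would determine
# (here just a hand-written stand-in table)
next_token_probs = {"the": {"cat": 0.6, "dog": 0.4}}

def sample_next(token):
    # Sample the next token from the learned distribution
    dist = next_token_probs.get(token)
    if not dist:
        return "<eos>"
    return random.choices(list(dist), weights=list(dist.values()))[0]

print(eliza_reply("My mother uses a computer"))
```

The hardcoded table versus the (trained) probability table is the whole structural difference this toy captures; everything else is scale.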
Closing with an anecdote from the wiki page:
Weizenbaum's own secretary reportedly asked Weizenbaum to leave the room so that she and ELIZA could have a real conversation. Weizenbaum was surprised by this, later writing: "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."
Thought about this some more. No, I do not think that, using the Chinese Room thought experiment, you will ever accept a computer program's behavior as intelligent, no matter how much evidence for its intelligent behavior you get. Because by definition there's an algorithm executing it.
I don't agree, because I don't buy into the human exceptionalism that we meat machines have some magic inside us that gives us intent the machines can't have.
$ python3
Python 3.13.3 (main, Apr 10 2025, 21:38:51) [GCC 14.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 9.11 - 9.9
-0.7900000000000009
That's closer than 0.21, but it still isn't correct. You need decimal arithmetic to get the correct answer:
jshell> new BigDecimal("9.11").subtract(new BigDecimal("9.9"))
$1 ==> -0.79
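For completeness, Python's standard decimal module gives the same exact answer as the BigDecimal call:

```python
from decimal import Decimal

# Exact decimal arithmetic, unlike binary floats
print(Decimal("9.11") - Decimal("9.9"))  # -0.79
```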