We gave 5 LLMs $100K to trade stocks for 8 months
aitradearena.com
CUDA-l2: Surpassing cuBLAS performance for matrix multiplication through RL
github.com
Multivox: Volumetric Display
github.com
State of AI: An Empirical 100T Token Study with OpenRouter
openrouter.ai
Transparent leadership beats servant leadership
entropicthoughts.com
SMS Phishers Pivot to Points, Taxes, Fake Retailers
krebsonsecurity.com
It’s time to free JavaScript (2024)
javascript.tm
Why are 38 percent of Stanford students saying they're disabled?
reason.com
Plane crashed after 3D-printed part collapsed
bbc.com
Thoughts on Go vs. Rust vs. Zig
sinclairtarget.com
How elites could shape mass preferences as AI reduces persuasion costs
arxiv.org
PyTogether: Collaborative lightweight real-time Python IDE for teachers/learners
github.com
Show HN: Onlyrecipe 2.0 – I added all features HN requested – 4 years later
onlyrecipeapp.com
I ignore the spotlight as a staff engineer
lalitm.com
Fighting the age-gated internet
wired.com
Countdown until the AI bubble bursts
pop-the-bubble.xyz
Autism should not be treated as a single condition
economist.com
Converge (YC S23) is hiring a martech expert in NYC
runconverge.com
Some models of reality are bolder than others
cjauvin.github.io
Pink Lexical Slime: The Dark Side of Autocorrect
cyberdemon.org
The RAM shortage comes for us all
jeffgeerling.com
> The metric reflects the proportion of all tokens served by reasoning models, not the share of "reasoning tokens" within model outputs.
I'd be interested in a clarification on the reasoning vs non-reasoning metric.
Does this mean the reasoning total is (input + reasoning + output) tokens, or just (input + output)?
Obviously the reasoning tokens would add a ton to the overall count, so it would be interesting to see an apples-to-apples comparison with non-reasoning models.
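To make the question concrete, here's a toy sketch with entirely made-up numbers showing how much the choice of definition can move the totals:

```python
# Hypothetical token counts for a single request (illustrative only).
input_tokens = 2_000
reasoning_tokens = 6_000   # hidden chain-of-thought; often dwarfs the visible output
output_tokens = 1_000

# Definition A: reasoning tokens count toward the total served.
total_with_reasoning = input_tokens + reasoning_tokens + output_tokens

# Definition B: only the visible input + output count.
total_without_reasoning = input_tokens + output_tokens

print(total_with_reasoning, total_without_reasoning)  # 9000 vs 3000
```

With these made-up numbers, definition A credits the reasoning model with 3x the tokens of definition B for the exact same request, which is why the choice of denominator matters so much for a "share of all tokens" metric.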