Nvidia Security Team: “What if we just stopped using C?” (2022)
183 comments
February 10, 2025 · jasonpeacock
roenxi
People who don't want to read the first line of the article, rejoice!
It sounds like a good decision though. Formal verification tends to make more sense in highly concurrent situations where there is a high-value stable API to be supported. That describes graphics cards and surrounding environments really well so it makes sense that Nvidia would find use cases.
Another point from the article is that they made the decision after a pilot project that went well. It'd be nice to have more details about it; that sort of thing is important to do well and always interesting. Lo and behold, we do have more details about it! https://blog.adacore.com/when-formal-verification-with-spark... - there is a video in there that looks to be worth a watch.
MichaelDickens
Nothing in the article explained what SPARK is. I appreciated GP's comment.
alberth
I really wish there was more active development on SPARK.
It's gone largely untouched over the last 10 years.
https://github.com/AdaCore/spark2014
For those not aware, SPARK gives even more safeguards/guarantees than Rust (see below) ... and does so at compile time (whereas not all of Rust's safeguards are enforced at compile time).
https://blog.adacore.com/should-i-choose-ada-spark-or-rust-o...
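To give a flavour of what that looks like in practice, here is a toy sketch of my own (not code from the article or from Nvidia): gnatprove discharges both the contract and the absence of runtime errors statically, before the program ever runs.

    package Clamp_Sketch with SPARK_Mode is
       subtype Small is Integer range 0 .. 1000;

       --  gnatprove proves the postcondition and the absence of
       --  overflow/range errors at analysis time, not at run time.
       function Clamped_Add (A, B : Small) return Small
         with Post => Clamped_Add'Result = Integer'Min (Small'Last, A + B);
    end Clamp_Sketch;

    package body Clamp_Sketch with SPARK_Mode is
       function Clamped_Add (A, B : Small) return Small is
       begin
          if A + B > Small'Last then
             return Small'Last;
          else
             return A + B;
          end if;
       end Clamped_Add;
    end Clamp_Sketch;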
RachelF
They actually had to use the non-cutdown version of SPARK that this company sells, called SPARK Pro.
kevlar700
SPARK is free for all to use and is open source. They chose to pay for AdaCore's pro support services and verified Pro Ada compiler rather than use the FSF GCC/GNAT Ada compiler. SPARK is integrated with the GNAT compiler (for compatibility), but the slower analysis is done separately by gnatprove, which keeps compilation and iterative development fast. Nvidia can certainly afford to do so, of course.
simon4ada
AdaCore's pro support includes more recent releases and, in case of problems, wavefronts. That said, the free version is fairly recent, and you can get community (and sometimes vendor) support.
mbonnet
SPARK has some adherents in spacecraft programming for its formal verifiability and safety features.
rs186
[flagged]
Tadpole9181
I'm with you in spirit, but NVIDIA Spark is a thing insofar as RAPIDS?
https://www.nvidia.com/en-us/deep-learning-ai/solutions/data...
antonvs
Yes, but that has nothing to do with the Spark language that Nvidia is talking about in the OP: https://en.wikipedia.org/wiki/SPARK_(programming_language)
Seems like a great example of LLMs getting confused by language and lacking any actual understanding, although I haven't yet looked into this particular case.
cxr
I maintain conviction in my position that if AdaCore and/or other champions of Ada were to undertake a serious project (i.e. not an April Fools' Day joke blog post[1]) to ship a compiler that accepts a curly-braced dialect of the language[2][3], then there would be at least twice as many new, earnest users of the vulgar "skinned" form as there are programmers today using the existing language, and that this tripling would happen virtually overnight.
I'm not even talking about major revisions to the grammar. I'm saying fork the lexer to accept a different set of terminals, and leave the nonterminals alone.
Is this stupid and unreasonable and should programmers just get over themselves and use a language on the strength of its merits instead of being turned off by surface-level stuff? Yeah, it is, and yeah, they should. But people are fickle.
1. <https://blog.adacore.com/a-modern-syntax-for-ada>
rramadass
Not related to Ada, but Dafny is a "verification-aware programming language" where both formal specification and verification are combined in the same source language - https://dafny.org It uses curly braces and can generate code for a number of different languages at the backend (e.g. Java, C#, Go, Python, JavaScript, C++).
I haven't played with it yet, but just from reading the documentation I really like this approach, since you only need to learn one language/toolchain for both formal specification/verification and implementation. And by giving you the flexibility to generate code for various mainstream languages, you can slowly add verified code module by module to existing codebases.
shipp02
You are right, but the VHDL-like syntax is quite nice. It does some surprisingly modern things.
NavinF
"is begin" and "end;" are the opposite of nice/modern
coliveira
Nothing prevents anyone from creating a preprocessor that maps { to BEGIN and } to END.
timbit42
Derivatives of Pascal such as Modula-2, Oberon, Lua, and Ada do not require BEGIN on IF, FOR, WHILE, LOOP or CASE statements, making the syntax much cleaner.
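For example (an illustrative fragment of my own, not tied to anything in the article), each Ada control structure closes itself, so no BEGIN/END pair is needed just to group the body:

    procedure Syntax_Demo is
       --  Hypothetical names, purely for illustration.
       Temperature : Integer := 42;
       Limit       : constant Integer := 100;
       Buffer      : array (1 .. 8) of Integer := (others => 1);
    begin
       --  "if" and "for" carry their own end markers; no BEGIN is
       --  needed to group the statements inside them.
       if Temperature > Limit then
          Temperature := Limit;
       end if;

       for I in Buffer'Range loop
          Buffer (I) := 0;
       end loop;
    end Syntax_Demo;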
cxr
Nothing prevents anyone from trying to eat spaghetti with a spoon. (Or from modifying the lexer to accept a different set of terminals, even.)
> Kramer: "Yeah, well, I could do it. I don't wanna do it."
> Jerry: "We didn't bet if you wanted to do it. We bet on if it would be done."
> Kramer: "And― and it could be done."
> Jerry: "Well of course it COULD be done. Anything COULD be done. But it's only is done if it's done! Show me the levels! The bet is the levels!"
> Kramer: "I don't want the levels!"
> Jerry: "That's the bet!"
SSLy
Spaghetti is eaten with spoon and fork…
windward
Implementing the grammar is the easiest 0.1% of programming language dissemination.
Adding explicit non-const to g++ would be so, so easy. But pointless.
juped
I'm also desperate to stop Rust by any means, but I don't think you're correct; I think the sticking point for language adoption is the presence or absence of a big npm clone.
steveklabnik
Alire has existed for a long time now; the oldest release on GitHub is 0.4 from 2018.
devit
That would be a good start as the current syntax is absurd.
marssaxman
It's not absurd, it's just out of fashion. Ada syntax comes from the same heritage as Pascal, which was as popular as C during the 80s (and sometimes even more so).
mbonnet
Pascal died for several very good reasons.
timbit42
Derivatives of Pascal such as Modula-2, Oberon, Lua, and Ada do not require BEGIN on IF, FOR, WHILE, LOOP or CASE statements, making the syntax cleaner and more concise than Pascal's.
NavinF
Article doesn't say what parts of Nvidia's stack use SPARK. Considering Nvidia is a huge software company with ~30,000 employees, "There are now over fifty developers trained and numerous components implemented in SPARK" doesn't inspire confidence.
IMO the realistic path towards formal verification is AI proof assistants that automate the tedious parts instead of forcing you to write your code in a weird way that's easier to prove
btown
I'd also add that AI code generation in non-formally-verifiable languages, at places with concurrency requirements like Nvidia, might end up in the short-term creating more hard-to-spot concurrency bugs than before, as developers become incentivized to tab-complete code without fully thinking through the implications as they type.
But AI code generation for formally verifiable programs? And to assist in writing custom domain-specific verifiers? Now that's the sweet spot. The programming languages of the future, and the metaprogramming libraries on top of them, will look really, really cool.
stefan_
Have you tried it for Rust, arguably a much lower bar than actually fully formally verifiable? Sorry to say but AI can't figure out the borrow checker. I'd speculate it does poorly on a lot of things that are sort of implicit in the code / properties of the grand structure rather than incremental text things.
kevlar700
There is a podcast and a Black Hat(?) video about Nvidia choosing SPARK over Rust. Not because of formal verification at all, but because it is a more developed prospect and offers better security even without any formal verification. This isn't mentioned, but Ada is also far better at modelling registers or even network packets.
I might consider AI if it utilised SPARK's gnatprove, but I wouldn't use AI otherwise.
siknad
Maybe better performance can be achieved with specialized models. There are some that were able to solve mathematical olympiad problems, e.g. AlphaProof.
TheOtherHobbes
AI means some future more rigorous system than today's Clippy++ LLMs.
At least that's what it should mean. It's not clear if that's going to happen.
steveklabnik
> Sorry to say but AI can't figure out the borrow checker. I'd speculate it does poorly on a lot of things that are sort of implicit in the code / properties of the grand structure rather than incremental text things.
I don’t really use AI tools, but in the past few weeks I’ve tried it with Rust and while it had problems, borrow checking errors were never part of them.
I fully agree that LLMs don’t “understand,” but people also really oversell the amount of thinking needed to satisfy borrowing in a lot of cases.
nostrademons
I had a production bug (caught in canary) introduced into Google Play in just this manner. The AI code review assist suggested switching a method to a more modern alternative, and both the author and the code reviewer thought the suggestion was a good one. But the new alternative was documented as not being safe to use in the onCreate() method of an Android Activity, and the code in question, while not in onCreate itself, was in a method transitively called by onCreate in a different Activity. Boom goes an unrelated Activity. It would've been caught trivially by an expressive type system, but the LLM doesn't know that.
erichocean
> But AI code generation for formally verifiable programs?
For verifiable domains, this really is the sweet spot.
An annoying aspect of verification-in-practice is that it is just really bulky—there's a lot to type in, and it's tedious.
LLMs, especially the latest crop of weak-reasoning models, are great for this.
atiedebee
I tested o3-mini yesterday, having it verify a bit-hack for "vectorizing" 8x8-bit integer addition using a single 64-bit int value[0]. Suffice it to say, I am not impressed. I asked it to give me a counterexample that would make the function fail, or to tell me that it works if it does. It mentioned problems regarding endianness which weren't present, and it mentioned carries that would spill over which couldn't happen. I gave it chances to produce counterexamples, but the counterexamples it gave didn't fail.
Only after telling it that I tested the code and that it works did it somewhat accept that the solution worked.
I think a deterministic, unambiguous process is a lot more valuable for formal verification.
[0] https://chatgpt.com/share/67aefb63-f60c-8002-bfc6-c7c45b4520...
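For reference, the kind of SWAR bit-hack under discussion looks roughly like this in Ada; this is a sketch of the usual mask-and-fix-up formulation, not necessarily the exact code from the linked chat:

    with Interfaces; use Interfaces;

    --  Adds the eight bytes packed into A to the corresponding bytes of B,
    --  modulo 256 per lane. Masking off each lane's top bit before the add
    --  keeps carries from crossing lane boundaries; the xor restores the
    --  top bits afterwards.
    function Add_8x8 (A, B : Unsigned_64) return Unsigned_64 is
       High : constant Unsigned_64 := 16#8080_8080_8080_8080#;
    begin
       return ((A and not High) + (B and not High)) xor ((A xor B) and High);
    end Add_8x8;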
dmix
> as developers become incentivized to tab-complete code without fully thinking through the implications as they type.
Is this true? There should always be a QA process and you should always carefully review when committing. The writing process doesn't influence that unless you're automating whole swaths you don't understand during reviews.
dogma1138
I’m not entirely sure they are writing any portion of their driver using SPARC. The presentation they gave 5 years ago seems to indicate they are limiting its usage to firmware for their embedded RISC-V co-processor. They may have expanded the usage since, but I still think it's predominantly firmware-related, with possibly some expansion to their automotive and robotics solutions.
https://www.slideshare.net/slideshow/securing-the-future-of-...
ipsum2
SPARC and SPARK are two entirely different things.
dogma1138
Indeed they are, the misspelling is on my end :)
lou1306
It bears repeating that Nvidia is not, strictly speaking, a SW company. It is a semiconductor company. That 30k-employee figure includes all Nvidia employees, not all of whom are in software.
I am rather skeptical of AI in this context. Until you have verifiably correct AI assistants, you still need a highly skilled human in the loop to catch subtle errors in the proofs or the whole result is moot anyway.
And there _are_ tools out there capable of verifying C code, but for a greenfield project implementation/verification in a higher-level, formal language + verified compilation might make more sense.
transpute
> Article doesn't say what parts of Nvidia's stack use SPARK.
Their linked case study lists three examples and one category, https://www.adacore.com/uploads/techPapers/222559-adacore-nv...
- image authentication and integrity checks for the overall GPU firmware image
- BootROM and secure monitor firmware
- formally verified components of an isolation kernel for an embedded operating system
- In general, their targets tend to be smaller code bases that would benefit the most from SPARK’s strong typing, absence of runtime errors, and in some cases, rigorous formal verification of functional properties
More details in 2021 talk on RISC-V root of trust in Nvidia GPUs, https://www.youtube.com/watch?v=l7i1kfHvWNI

> NVRISCV is NVIDIA’s implementation of the RISC-V ISA and Peregrine subsystem includes NVRISCV and multiple peripherals. They show how fine-grain access controls, formally verified for correctness, allow following the principle of least privilege for each partition. NVRISCV provides secure boot that starts with an immutable HW, the chain of trust extends to the Secure Monitor in SW, where partition policies are set up and isolation enforced using HW controls.. Boot and Secure Monitor software is implemented in SPARK.
swiftcoder
The article seems fairly clear that it is the security folks within Nvidia that are spearheading this. 50 engineers on the security team doesn't seem unreasonable for a company of that size.
NavinF
I don't believe that: https://news.ycombinator.com/item?id=43042166
zitterbewegung
I feel like it’s going to go a step further and we won’t write actual “code” but rather comprehensive tests, and it will generate the code. Sort of like the movement from assembly to C.
bawolff
Historically AI has been pretty bad with this. Machine learning is famous for finding solutions that pass all the test cases in out of the box ways but don't do what you want.
NavinF
That hasn't been my experience. I've mostly seen fake stories along those lines. Eg https://gwern.net/tank
makeitdouble
Writing comprehensive tests is the part we're weakest at IMHO.
That's the same paradigm as outsourcing development at some cheap place and doing acceptance tests against the result. It saves money but that's not how you'd build an airplane for instance...
randomNumber7
> that's not how you'd build an airplane for instance...
Unless you want it going boing boing boeing.
Jean-Papoulos
Unfortunately humans will probably stay very bad at writing tests that cover all possible cases. And who's gonna test the tests?
antirez
Totally agree that AI is going to have a huge impact on security of languages, and will change many paradigms.
UltraSane
Formal verification is normally used on the most security-critical code, so even 50 programmers is a lot.
r1chardnl
After briefly skimming through their talk, they mention they're a third-party consultancy making recommendations on safety for NVIDIA, so take "Nvidia Security Team" how you will. Also, these drastic moves towards a completely different language always bug me, especially when the syntax is also completely different.
Then there's everything library-wise, and CUDA is all C/C++ if I'm not mistaken. In every large project I'm sure you're eventually going to find some exploitable attack vector, especially since, from what I believe, the focus for GPU/CUDA until recently was on performance rather than security, and those are always trade-offs.
transpute
> After briefly skimming through their talk they mention they're a third party consultancy company making recommendations on safety for NVIDIA, so "Nvidia Security Team" take it how you will.
This is a marketing case study by the major Ada tools and training vendor, on years of engineering work done by their customer, Nvidia. It includes quotes from Nvidia employees and their blog references a Defcon video by the head of the Nvidia Offensive Security team, https://www.adacore.com/uploads/techPapers/222559-adacore-nv...
- Principal Software Engineer
- Manager, GPU Firmware Security
- Senior Manager, GPU Software Security
- VP, Software Security
> At the conclusion of their initial POC at the end of 2018, NVIDIA had five developers trained in SPARK. By the second quarter of 2022, that number had grown to over fifty.. Several NVIDIA teams are now using SPARK for a wide range of applications that include image authentication and integrity checks for the overall GPU firmware image, BootROM and secure monitor firmware, and formally verified components of an isolation kernel for an embedded operating system.
touisteur
The focus of their work seems to be low-level firmware, not CUDA or any kind of compute. It makes a lot of sense there because you won't need as many libraries.
The syntax is really a red herring, especially when you get contracts, proof of absence of runtime errors, and higher-level functional proof, mostly automated or assisted with annotations. If you're actually going to make this effort, the lack of curly braces and ampersands shouldn't be a main concern.
indolering
Ada's refusal to make their language any more approachable is a problem.
yjftsjthsd-h
What's unapproachable about it? And if your answer involves syntax, how do you explain Python?
0xbadcafebee
I forget sometimes that the papers and talks issued by teams at most companies are for PR and resume-padding. They hype the findings that justify that team's budget, and bury any negatives.
The other funny thing I noticed is that formal verification just means the program implements the standard - it doesn't mean the standard doesn't have security holes! And sometimes the security holes are implementation-specific, like timing attacks, which can require weird low-level tricks to solve.
edelbitter
>bury any negatives
I was looking for the x86-specific rant, and did not find it. You'd think that team would have had something to say about architecture complexity.
transpute
Nvidia’s GPU root of trust, where they are using SPARK, is based on RISC-V; they have a talk about that choice: https://news.ycombinator.com/item?id=43045952
kqr
> moving towards a completely different language always bug me, especially when the syntax is also completely different.
Why do syntax differences bug you? I could understand most concerns about switching ecosystems, but surely the differences between e.g. C++, Java, and JavaScript are orders of magnitude larger than those between C++ and Ada. Syntax is the smallest piece of it all, as far as I'm concerned.
> Then there's like everything libraries wise and CUDA is all C/C++ if I'm not mistaken.
Ada has excellent C interop.
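As a minimal sketch of that interop (my own example, using only the standard Interfaces.C packages and the C library's puts):

    with Interfaces.C;         use Interfaces.C;
    with Interfaces.C.Strings; use Interfaces.C.Strings;

    procedure Hello_C is
       --  Bind directly to the C library's puts(); no wrapper layer needed.
       function Puts (S : chars_ptr) return int
         with Import, Convention => C, External_Name => "puts";

       Msg    : chars_ptr := New_String ("Hello from Ada, via C's puts");
       Unused : int;
    begin
       Unused := Puts (Msg);
       Free (Msg);
    end Hello_C;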
zifpanachr23
Maybe it's naive of me but I also don't really perceive much of a security imperative for NVIDIA. They make graphics cards and slop generator cards for the most part. What exactly is the threat model here that requires switching to prioritizing memory safety? Are there a lot of graphics card related codes that are being exploited in the wild?
transpute
NVIDIA GPU RISC-V root of trust is analogous to Apple T2 secure enclave, AMD PSP, or Intel ME, which all perform security-critical functions.
> What exactly is the threat model here
It probably varies by product, but one commercial possibility is protection of price premiums, e.g. enforce feature segmentation for different products or customers, while using common silicon. NVIDIA operating margin reached 50%, unusually high for a hardware company, https://www.macrotrends.net/stocks/charts/NVDA/nvidia/operat.... AMD margin is below 20%.
2021, https://www.youtube.com/watch?v=l7i1kfHvWNI
2024, https://static.sched.com/hosted_files/riscvsummit2024/fe/Key...
~1 Billion RISC-V cores shipping in 2024 NVIDIA chips
Unified embedded HW and SW across all NVIDIA products
• Eliminates replication in basic primitives (isolation, crypto etc.)
• Maximizes SW/HW leverage across NVIDIA
Configuration: pay only for what is needed
Custom extensions: additional functionality, security, and performance
Our HW and SW architecture enable differentiation
There are upcoming open hardware/firmware RoT building blocks like OpenTitan (RISC-V), OCP Caliptra and TockOS (Rust) that could be used by competing device and platform vendors.

> don't really perceive much of a security imperative for NVIDIA
When countries start budgeting hundreds of billions of dollars for national investment in LLM-based AI based on GPUs, they may introduce new security requirements for the underlying infrastructure.
tonyarkles
Yeah, there's a whole bunch of places where NVIDIA kit is used outside of "graphics cards and slop generators". I work on a drone system based around the Orin AGX and we definitely have "leverage the root of trust functionality to lock down the code that can run on the Orin" on the roadmap before we ever let the hardware end up directly in customers' hands. Root of trust -> Signed Bootloader -> Signed Kernel -> Signed Binaries. There's too much IP that we've spent too much money on to risk having end-users getting access to the raw TensorRT models involved.
guipsp
Are you aware of how the Nintendo Switch ended up with piracy?
Joel_Mckay
In general, NVIDIA never had proper bug-free support in C for well over a decade (hidden de-allocation errors etc.), and essentially everyone focused on the CUDA compiler with the C++ API.
To be honest, it still bothers me that an awful GPU mailbox design is still the cutting-edge tech for modern computing. GPU rootkits are already a thing... Best of luck =3
MrLeap
"GPU rootkits" sounds like a misnomer unless they start getting rewritable persistent storage (maybe they do now and my knowledge is out of date).
If you've got malicious code in your GPU, shut it off wait a few seconds, turn it back on.
Actually looking at the definition, my understanding might be off or the definition has morphed over time. I used to think it wasn't a rootkit unless it survived reinstalling the OS.
Joel_Mckay
These have direct access to the DMA channel of your storage device, and PoCs have proven MMU/CPU bypass is feasible.
My point was the current architecture is a kludge built on a kludge... =3
einpoklum
> with the C++ API
The funny thing is that the "C++ API" is almost entirely C-like, foregoing almost everything beneficial and convenient about C++, while at the same time not being properly limited to C.
(which is why I wrote this: https://github.com/eyalroz/cuda-api-wrappers/ )
> an awful GPU mailbox design is still the cutting-edge tech
Can you elaborate on what you mean by a "mailbox design"?
pjmlp
Depends on which CUDA API one is looking at.
Joel_Mckay
In general, a modern GPU must copy its workload into/out of its own working area in VRAM regardless of the compute capability number, and thus is constrained by the same clock-domain-crossing performance bottleneck many times per transfer.
At least the C++ part of the systems were functional enough to build the current house of cards. Best of luck =3
bsder
Yeah, when I see the equivalent of Slang written in Ada/SPARK, I'll actually believe them.
kortilla
Yeah, it completely loses all weight when it’s an external consultancy suggesting it.
It’s easy to say “rewrite everything in X because Y” when you don’t have to deal with the burden of figuring out how to do that.
Flagged the article because the title is super misleading.
transpute
> Flagged the article because the title is super misleading
From the case study, the question was posed by Nvidia employees, before finding Ada/SPARK and Adacore, https://news.ycombinator.com/item?id=43043381
> NVIDIA began to ask themselves, “Can we change our fundamental approach? And if so, which of these tools can we actually use?” They put these questions to their software security team. In reply, the security team came back with what Daniel Rohrer characterized as “a fairly heretical” proposition: “What if we just stopped using C?” This led to other questions, like, “What alternative languages and tools are available to support the use of formal methods?” In trying to answer those questions, NVIDIA discovered SPARK.
white-flame
The real problem is deeper than this. The actual question to ask is:
"What if we just stopped distributing and blindly executing untrusted binary blobs?"
A trusted compiler in the OS, and some set of intermediate representations for code distribution, would solve a massive number of security issues, increase compatibility, and allow for future performance increases and for disallowing suspect code patterns (Spectre, Rowhammer, etc.). Specializing programs at install time for the local hardware makes way more sense than being locked into hardware machine-code compatibility.
tsimionescu
That does nothing to fix the vast majority of security issues, which are caused by trusted but not memory safe programs running on untrusted input.
It's also an extremely unrealistic goal. First of all, you run into a massive problem with companies and copyright. Second of all, it will be very hard to convince users that it's normal for their Chrome installation to take half an hour or more while using their CPU at 100% the whole time.
bawolff
I feel like you might as well ask "why not world peace"?
There are a huge number of practical issues to be solved to make that be viable.
grayhatter
could you list a few of the problems you predict?
Taikonerd
The VST Lab at Princeton works on this sort of problem: https://vst.cs.princeton.edu/
"The Verified Software Toolchain project assures with machine-checked proofs that the assertions claimed at the top of the toolchain really hold in the machine-language program, running in the operating-system context."
Some of the same researchers worked on TAL (typed assembly language), which sounds like it could be one of the "intermediate representations" you mentioned.
transpute
For a while, Apple required apps to be submitted as bitcode (LLVM IR) to the App Store, where they would be converted to x86 or Arm machine code during device install. They stopped that a couple of years ago, after migration to Apple Silicon.
refulgentis
Apple used to require bitcode (LLVM IR) for App Store submissions.
The rest is interesting, but nothing was done on install; it wasn't converted to machine code or anything.
In fact, in practice, it never ended up being used.
Well, that's not particularly relevant: the idea was never to do something on device anyway.
Really excellent post here summarizing that I can vouch for: https://stackoverflow.com/questions/72543728/xcode-14-deprec...
transpute
Thanks for the correction and link. Relevant to the comment above about binary blobs:
> The idea to have apps in a CPU neutral form available on the app store is not a bad one; that's why Android chose Java Byte Code (JBC). Yet JBC is a pretty stable representation that is well documented and understood, Bitcode isn't. Also on Android the device itself transforms JBC to CPU code (AOT nowadays).
arbitrandomuser
How did that work though? Isn't bitcode tied to a target triplet?
saagarjha
It didn’t. Bitcode was only used for watchOS compatibility.
saagarjha
Bitcode was never required for app submissions on macOS. It also was never portable across architectures.
watt
There is a rule that if somebody poses this hypothetical with the word "just" in it, they have signed themselves up to go and implement it.
So, congratulations, take it and run with it.
pabs3
> "What if we just stopped distributing and blindly executing untrusted binary blobs?"
You can do that right now with open source software and Bootstrappable Builds.
https://bootstrappable.org/ https://lwn.net/Articles/983340/
LeFantome
Pretty huge promotional opportunity for AdaCore. You can tell because this was written by AdaCore.
Still, I am a bit convinced. The NVIDIA name does increase the chance that I will evaluate SPARK sometime soon. I have to admit, other than knowing it exists, I am not super familiar with it. What I have seen before now has felt like them using the popularity of Rust as a platform for promotion. This, at least, appears to be a use case where they came in on their own merits.
transpute
> huge promotional opportunity
Right? Yet they published this customer case study in 2022 and it never reached HN. I found it in a 2025 search on GPU firmware security.
PaulHoule
Efficiency of Ada should be pretty close to C, but that strange thing Ada does where you define your own numeric types like
type Day_type is range 1 .. 31;
creeps me out a little, it makes me think I have to throw Hacker's Delight [1] in the trash if I want to use it, but I could be wrong. It makes me think of the deep history of computing, when there were a few straggler architectures (like the PDP-10 [2]) around that didn't use the 8-bit byte, and when Knuth wrote a set of programming books based on an instruction set that wasn't necessarily binary.

(Lately I was thinking about making a fantasy computer that could be hosted in Javascript which was going to be RISC but otherwise totally over the top, like there would be bitwise addressing like the PDP-10. First I wanted it to be 24 bit but then I figured I could pack 48 bits in a double so I might as well. It even would have a special instruction for unpacking UTF-8 characters and a video system intended for mixing Latin and CJK characters. Still boils down to an 8-bit byte, but like the PDP-10 it could cut out 11-bit slices or whatever you want. I was going to say screw C, but then I figured out you could compile C for it.)
Tadpole9181
Love that feature, along with derived types[1] and subtype predicates[2]. There's a special place in my heart partitioned for things that won't let you screw things up.
[1] https://learn.adacore.com/courses/intro-to-ada/chapters/stro...
[2] https://learn.adacore.com/courses/Ada_For_The_CPP_Java_Devel...
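A tiny sketch of both features (my own example, not taken from the linked courses):

    package Typing_Sketch is
       --  Derived types: logically different integers that won't mix.
       type Meters is new Integer;
       type Feet   is new Integer;
       --  M : Meters := Feet'(10);   --  rejected: Feet is not Meters

       --  Subtype predicate: a constraint beyond a simple range.
       subtype Even is Integer
         with Dynamic_Predicate => Even mod 2 = 0;
       --  E : Even := 3;             --  fails the predicate check
    end Typing_Sketch;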
yjftsjthsd-h
Alright, this is Hacker news, so I'm gonna nitpick...
> It even would have a special instruction for unpacking UTF-8 characters
From https://en.wikipedia.org/wiki/Reduced_instruction_set_comput... :
> The term "reduced" in that phrase was intended to describe the fact that the amount of work any single instruction accomplishes is reduced—at most a single data memory cycle—compared to the "complex instructions" of CISC CPUs that may require dozens of data memory cycles in order to execute a single instruction.
I don't think what you're describing is a RISC processor at all.
kbolino
Whether accelerated UTF-8 decoding breaks RISC-ness is an interesting question, and not one with an obvious answer IMO.
Supposing we loaded 4 bytes into a register with a load instruction (quite RISC), we could then have a "decode UTF-8" instruction which could set two output registers: one receiving the decoded code point, and the other receiving the number of bytes consumed (1-4). That's another perfectly RISCy operation: read one input register, operate on it, update two output registers. Most RISC architectures also allow base+offset addressing at least, so you can chain this to another load using the output of that second register; worst case, you'd need to add base+offset in a dedicated instruction. No violations of RISC here.
However, you'd start running into problems with alignment. Loading 4 bytes into a register typically requires 4-byte alignment, but UTF-8 is a variable-length encoding (hence the desire to accelerate it in the first place!). Is unaligned load RISCy? Many architectures now support it, but they usually didn't start off with it. Then again, 64-bit architectures can just ignore the problem entirely, since they can load 8 bytes at a time, which will always be enough to fit an arbitrary 4-byte value at any alignment. You'd just need to shift the value in the register by the amount of the misalignment, which is another valid RISC operation.
If you wanted to sidestep the alignment issue, then an alternate solution would be to decode the UTF-8 sequence straight from memory, but that definitely feels more like CISC to me.
taurknaut
This doesn't strike me as that different than an enum.
MathMonkeyMan
Arithmetic is defined on range types. So day 30 + day 4 is 34, which if you then try to coerce into a day will throw an exception.
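Concretely (a toy example of my own):

    procedure Day_Demo is
       type Day_Type is range 1 .. 31;
       D : Day_Type := 30;
    begin
       --  The addition happens in the base integer type, so the
       --  intermediate value 34 is fine; the range check fires when the
       --  result is stored back into a Day_Type, raising Constraint_Error
       --  (GNAT typically also warns about this at compile time).
       D := D + 4;
    end Day_Demo;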
dragonwriter
And... So? It seems you are saying Day_type + Integer is Integer, and Integer (in general) cannot safely be coerced into Day_type and... that’s logically correct?
nottorp
Not only Ada?
Pascal is less esoteric and has had that forever.
PaulHoule
My feelings about Pascal are pretty mixed.
Pedagogically oriented computer science profs circa 1980 were aghast that BASIC had become the standard for teaching young people to program. It was difficult to fit Pascal into a microcomputer then, so we were stuck with the atrocious UCSD Pascal [1] which used a virtual machine to make up for the weak instruction sets of many micros, particularly the 6502. Since the compiler ran inside the VM, compiling a small Pascal program was like compiling a large C++ program today.
Not long after that I got to use Pascal on a VAX which was not so bad, but pretty obviously not up to doing systems programming, particularly compared to C, which was starting to show up on micros such as Z-80s running CP/M and the TRS-80 Color Computer running OS-9.
Then I got a 80286 computer and I really liked Turbo Pascal because it added the (non-standard) facilities you need to do systems work, but went back to C when I went to school because it was portable to the 68k and SPARC based Sun machines we had.
nottorp
> compared to C, which was starting to show up on micros such as Z-80s running CP/M
I had some very limited access to CP/M z80 machines. Probably somewhere between 86-90. All I remember about the C compiler they had is that I had to swap 2 or 3 floppies to produce a linked executable, so it sounds similar to this UCSD Pascal.
My first real contact with Pascal was Turbo Pascal, and that ran loops around any other compiler I had access to back then...
dang
Discussed at the time:
Nvidia Security Team: “What if we just stopped using C?” - https://news.ycombinator.com/item?id=33504206 - Nov 2022 (362 comments)
8organicbits
It's hard to tell what components are now written in SPARK. The driver is mostly C, and none of the open source repos are identified as SPARK.
transpute
Their PDF has a bit more detail, https://news.ycombinator.com/item?id=43043541
chadcmulligan
Any SPARK compiler? I've found https://www.adacore.com/sparkpro which says "request pricing" (never a good sign). Any open ones suitable for playing with?
ajxs
There's a GPL version of GNATprove, which you can install with Alire (https://alire.ada.dev). That's the version I use.
MaxBarraclough
You wouldn't guess it from AdaCore's homepage, but SPARK tooling (including provers) and the GNAT Ada compiler are both made freely available, with the goal of attracting interest from FOSS projects.
Here's a page on this: https://blog.adacore.com/a-new-era-for-ada-spark-open-source...
ajxs is right to say that the way to get your hands on this tooling is to use the Alire package-manager. AdaCore no longer offer alternative means of distribution (except, presumably, to paying customers).
cdevries
"Beware of bugs in the above code; I have only proved it correct, not tried it." - Donald Knuth
tombert
I've never used SPARK, but the Wikipedia page makes it seem pretty interesting.
For that matter, Ada seems interesting, I kind of wish it had caught on outside of the defense industry.
numeromancer
It failed to catch on outside the defense industry because that is where it started. It was long sneered at as a "language designed by committee".
In one programming class, in college, we used a text-book called "Oh my, Modula 3!". On the back they explained the title, saying "better 'Oh my, Modula 3!' than 'Oh no, Ada!'".
timbit42
You mean, "Oh My! Modula-2!", not 3.
The title of that book was chosen as a reference to a previous book titled, "Oh! Pascal!".
tombert
Yeah, people have explained that to me here before.
It's just too bad, because Ada looks like a language that should have gotten more popular. I was playing with some of the concurrency constructs built into it, and it was fairly pleasant, and it makes me think about the "what if?" universe where it caught on instead of C++ dominating the 90's and 2000's.
They used SPARK instead:
https://en.wikipedia.org/wiki/SPARK_(programming_language)