159 points by Oxynux 124 days ago
Basically LLVM is now a dependency of equal importance to GCC for Debian.
Hopefully this will help motivate expanding architecture-support for LLVM, and by proxy Rust.
> Basically LLVM is now a dependency of equal importance to GCC for Debian.
Yes, exactly. And until then, every time a new dependency on LLVM (or by extension on Rust) appears, users of architectures LLVM doesn't target complain bitterly and threaten to fork the last version without that dependency. (This argument has happened for Firefox, rsvg, and recently bzip2.)
It's been a fairly important thing for Debian for a while, but this probably solidifies it even more. https://clang.debian.net/
Godbolt shows that Clang is a lot more aggressive with use of "cmov" -- conditional move instructions.
When the Spectre and Meltdown family of security debacles surfaced, cmov was promoted as a way to choke off speculative execution. But that suggests, also, that cmov would be bad for performance in places where cross-process security isn't a concern.
Did something important change, or did I misunderstand? Maybe only cmov memory -> register blocks speculation?
CMOV replaces branches. So no branch, no speculation. Typically, though, only branches that the compiler identifies as not reliably predictable would be replaced, because with a well-predicted branch you don't waste resources. CMOV requires both values to be computed, wasting the work of one path, but saving the mispredict penalty, which for flaky branches can be an overall win.
Choice of cmov vs branch is pretty tricky because profitability depends on branch probabilities, which are commonly not known. LLVM has an X86 cmov conversion pass that does this based on some cost modeling for loop-carried dependencies, but it does not always make the right choice. Common problem is binary search code where you really, really want that cmov, but might not get it.
I'm surprised at how m68k was rescued from oblivion in 2013... Who is using it?
I suspected that it had something to do with ColdFire. It seems that was partly true, but another interesting factor was apparently ARAnyM (an Atari 68k ST/TT/Falcon emulator) which can conveniently run a Linux system: https://grep.be/blog/en/computer/debian/m68k/arrakis/
There's also the Apollo core used in the Vampire accelerator, that's much faster than anything else m68k, and even faster than ColdFire.
Yes, but that doesn't explain the m68k porting effort of 2013. The Apollo core doesn't have an MMU so it won't run Linux.
I thought m68k/nommu exists and is used?
That's true and I didn't know of that. That said, the Amiga configuration depends on an MMU. I don't know how hard it would be to retarget it for an MMU-less system.
Wait, even a 68040 embedded the MMU on die. The Apollo core doesn't have a MMU?
I frequently see it advertised as not having an MMU, but it turns out that it does...it's just that it's a new design that isn't Motorola compatible.
On Amiga, where programs typically all live in the same shared memory space, this is not a great concern for a regular user. It could be useful for developers and some specialist applications, and of course for more modern OS models.
Apollo-Core could certainly use better documentation.
I own a V500v2+ and it's an awesome board, but this kind of thing drives me nuts.
It has other silly limitations, such as the kickstart being embedded in the FPGA core, which is not open source. There's no way for a user to boot a custom kickstart; at most, it's possible to softkick one after first boot.
I used to run Debian/68k under aranym... Thanks Pelagatti
I think Thorsten Glaser resurrected the port because he wanted to test his shell (the MirBSD Korn shell) on m68k. Many other people also helped of course.
Amiga users (like me), Atari users, x68000 users and mac classic users, among others.
Some of these are using netbsd/m68k (like me), with some overlap.
Are you using mainly Debian on the command line on m68k, or actually running a GUI?
Currently away from my A1200 so I haven't touched it in ~3yr.
I'm hoping to move my A1200 to my current location soon and install current versions. The A500+ and A600 I have with me do not have a capable (MMU) CPU.
The last time I had cli-only on Debian (couldn't get X to work), and had X on netbsd.
No GPU, just AGA. Any GPU would make X much easier AIUI.
They say Rust support is blocked by having no RISC-V backend for LLVM, and no Rust GCC front-end. But I thought there was a working GCC front-end, just without the borrow checker, which is superfluous for code generation. Maybe it is for an old version of the language, and can't build the current library?
AFAIK there's no GCC port underway. There is another Rust compiler (mrustc), but it's not a complete alternative to rustc; in the short run it was mainly intended as a way to bootstrap a Rust compiler without needing to go all the way back to the original OCaml implementation of the first Rust compiler. As such, it is only known to produce correct binaries when run on the source of the Rust 1.19 compiler. It was a really successful project, but it's not something you can use to compile your own arbitrary Rust code.
Right, one can use mrustc to compile 1.19, but then they would need to progress through 1.XX -> current. A container chain for bootstrapping the Rust compiler would be a good idea.
Debian Rust maintainer here.
We don't need to bootstrap from scratch, Rust (via LLVM) can cross-compile very easily once riscv64 support is added.
In practice I am already cross-compiling amd64->mips/mipsel for every stable Rust release on my home box, because a native mips rustc compile runs out of memory on those 32-bit machines. The cross-compiled mips toolchain works completely fine to build smaller Rust packages, e.g. cargo, ripgrep.
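For the curious, the kind of cross-compile setup described above roughly amounts to a config fragment like this (a sketch; the exact target triple and linker name depend on which cross toolchain is installed):

```toml
# .cargo/config.toml — sketch of an amd64 -> mipsel cross build setup.
# "mipsel-linux-gnu-gcc" assumes Debian's gcc cross-toolchain package.
[target.mipsel-unknown-linux-gnu]
linker = "mipsel-linux-gnu-gcc"
```

With that in place, something like `cargo build --target mipsel-unknown-linux-gnu --release` would produce mipsel binaries on an amd64 host.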
Yeah back with 1.14 or so I set up some cross (and native) toolchains for mipsel with the intent of putting some stuff on my ER-X. While it was a giant pain in the dick with the unsupported version of Debian Ubiquiti was using, it was doable. Cross compiling on Debian has come a long way (altho I'm using crosstool-ng on OSX these days which is fairly slick).
This is probably what you mean:
It's still a work in progress, with only x86 supported so far. It's not a GCC frontend; it generates C, and is based on an old version of Rust (1.19). Right now I think it's only useful to bootstrap Rust, but that doesn't help you get a backend for another architecture.
Why not? Just bootstrap but on another architecture?
Just because you may have a functional compiler running on another architecture doesn't mean the compiler actually outputs code FOR that architecture. The compiler would still only support the architectures it did when you bootstrapped it, which in this case would not include that specific architecture.
The Rust compiler is a cross-compiler by default, so the usual way that you bootstrap on a new platform is to add it to the existing compiler, and then cross-compile to the new architecture. Which is what I read your parent as suggesting.
The discussion is not about bootstrapping. Bootstrapping using mrustc on riscv64 would get you a working rust 1.19 compiler that runs on riscv64 and knows how to emit x86 code. It wouldn't even allow you to go up to the latest version. That's not very useful, Debian already has a rust compiler that emits x86, it needs a rust compiler that emits riscv64.
Right. They could add the riscv64 backend to the current compiler, and then cross-compile it from x86_64, meaning that the bootstrap chain for riscv64 would start at 1.37.0. No mrustc even needed.
That said, I feel like we may be talking past each other... what I'm saying is, to get support for riscv64, you don't need to do a full new bootstrap. Maybe I'm misunderstanding the point of the thread.
That is the usual case, but some people don't want to download any binary blob that's not a C compiler.
The lack of LLVM backend surprises me. How much work is it to add a backend with 60 instructions (and few addressing modes)? It's clearly far more than I would have guessed.
LLVM has an experimental RISC-V backend. Adding it is easy; QAing it to the point that it can be considered non-experimental is a lot of work.
Nightly Rust already has basic RISC-V support, but the problem is that Rust's libc binding is not yet ported to RISC-V. Without libc, Rust's std is unusable; that's the main blocker here. While in theory you don't need std to compile a Rust program, most, if not all, Rust programs use std to some extent.
I am waiting until the Bitmanip extension lands to get excited about RISC-V:
Just can't live without that popcount.
What do you need popcount for?
yeah but 99% of the debian apps don't need popcount
Good to see it so far along!
There's some work to be done in rust on portability - it's been a problem in pkgsrc too.
Looking forward to a standardized MMU interface and page table format.
What's the precise problem? As far as I know the privspec has standardized enough in this area.
In any case, as one of the maintainers of the Fedora/RISC-V port (we also work closely with Debian) we're relaxed about kernel changes, because those are simple to make. (The big problems are changes in userspace code and glibc)
Any "Pi'esc" boards available yet? I would love to run Buster on a RISC-V board.
Seeedstudio has an "Arduino'esc" RISC-V board for around USD 30 which looks very interesting. 
This board also looks very interesting, but it's around USD 1000 and I can't find any performance comparisons against a "regular" PC. I know these things are like comparing apples to oranges, but I would like to know how, for example, a browser performs loading a webpage compared with another CPU.
There also appears to be an expansion board with PCI so you can add a GPU for USD 2000. 
Just informational: the esc sound meaning "-like" is rendered as esque. We "borrowed" it from the French, and we're not giving it back.
Similarly to "isch" from German, which we filed the serial numbers off of, and respelled "ish". For some reason we are squeamish about doing that to French.
Perhaps ironically, we took squeamish from French escoymous, but substituted "our" suffix.
"From Middle English -ish, -isch, from Old English -isċ (“-ish”, suffix), from Proto-Germanic -iskaz (“-ish”), from Proto-Indo-European -iskos."
Yeah, it's a little misleading to say that English borrowed that ending from German, English is a Germanic language and always had it.
Ok, and "-ish" and "-esque" clearly both trace back to the Proto-indo-european. So we're really just talking about spelling standards, which are a recent innovation.
The Unleashed board has performance similar to a Raspberry Pi 3 in CPU terms, but its biggest bottleneck is that the SD card runs on an SPI bus where it gets something like 2 MB/s. This is due to the complete SoC being open source and the SD card association not allowing open-source IP for its interface.
I think there's some degree of compatibility between SD-card and MMC-card standards, and that the latter are comparatively more open. Surely that could solve the open-IP issue?
Last time I looked both MMC and SD were JEDEC standards available under the same terms?
Although it won't solve the general performance problems (it's a slow rocketchip impl) you should be using NBD root. That's what we use for the Fedora/RISC-V builders.
It does have gigabit ethernet and there's also an expansion board with SATA and PCIe but that's another $2000 because it's using an enormous FPGA.
Whoa, I had considered the unleashed board to play around with and bookmarked it, but they are not very overt with the price. I didn't realize it was $1000 USD. You have to click through twice to discover it.
You might consider the HiFive board. It's around 50 dollars, a more appropriate price for something to play around with.
The chips used in the hifive board are not available for sale anywhere. They're apparently a one-off run for the hifive board specifically.
Therefore, they're not a good choice to base a project on.
I do look forward to low-power microcontrollers based on RISC-V with some actual commitment to continued availability.
The hifive is marketed as an arduino-like, but at 320 MHz it seems like it should be able to run a stripped-down Linux distro. Not fast, but if Linux can run on puny MIPS boards...
It doesn't appear to have any DRAM, so you'd have to find a linux capable of running in 16 (sixteen) kilobytes of onchip SRAM. Without an MMU.
You can run Linux on an Atmega664. Not well, but you can.
Yeah would love to have a RISC-V board with similar performance as a Pi and that isn't as expensive as the hifive unleashed. Haven't found anything yet.
The likely source will be lowrisc.org. Several folks that were involved with the Rpi are on the project.
"We will produce a SoC design to populate a low-cost community development board and to act as an ideal starting point for derivative open-source and commercial designs." - https://www.lowrisc.org/docs/untether-v0.2/
I wouldn't hold your breath (although I wish them luck). There are however a bunch of low cost high performance RV64 chips coming this year and next from China that will change things dramatically. (I'm a Fedora/RISC-V maintainer, and I'm in Beijing today and tomorrow talking to RISC-V people. It's 23:57 here, got to go to bed :-/ )
Do you know what companies are making them? I'd love to take a closer look as I've also been looking for a risc-v rpi alternative.
As is always the way with these things, unfortunately I'm under an NDA until the products are released. However I can tell you the rather obvious thing: We're helping them to understand how to get changes they need integrated into upstream communities (kernel, GCC, binutils, uboot in particular) so that their hardware will boot without patching when or soon after it is released.
The cheapest are indeed the devices based on the sipeed chip.
If that's insufficient, your best bet is to get a risc-v capable FPGA.