Big Book of R
114 comments · April 10, 2025 · cye131
aquafox
Why not mix R and Python in interactive analysis workflows:
1) Download Positron: https://github.com/posit-dev/positron
2) Set up a Quarto (.qmd) notebook
3) Set up R and Python code chunks in your Quarto document
4a) Use reticulate to spawn a Python session inside R and exchange objects between both languages (https://github.com/posit-dev/positron/pull/4603), as in the sketch below
4b) Write a few helper functions that pass objects between R and Python by reading/writing a temporary file.
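A minimal sketch of step 4a from the R side, assuming reticulate and pandas are installed; `py` is the bridge object reticulate exposes:

    library(reticulate)
    # send an R data.frame to Python; reticulate converts it to a pandas DataFrame
    py$df <- mtcars
    # run Python against it, then pull the result back into R as a data.frame
    py_run_string("summary = df.describe()")
    head(py$summary)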
dkga
This is exactly what I do for the vast majority of my academic papers. It combines the power and flexibility of R for statistics, which I agree with the upstream poster is incredibly underrated (especially with the tidyverse), with Python.
p00dles
Is this what tools like Nextflow or Snakemake aim to do? I don't know, and I'm genuinely curious, because I'm starting to work in bioinformatics, and doing different parts of an analysis pipeline in R and Python seems common, and really necessary if you want to use certain packages.
I'm wondering if I should devote time to learning Nextflow/Snakemake, or whether the solution that you outlined is "sufficient" (I say "sufficient" in quotes because of course, depends on the use case).
b-rodrigues
I'm writing a package called rixpress that leverages Nix to build reproducible pipelines with targets in either R or Python
Here's the github to the package https://github.com/b-rodrigues/rixpress/tree/master
and here's an example pipeline https://github.com/b-rodrigues/rixpress_demos/tree/master/py...
goosedragons
Org mode in Emacs is even better at this IMO. The only downside is that there's no guarantee other people use Emacs too.
vishnugupta
As someone who is learning probability and statistics for recreation, I wholeheartedly agree. I wish I had come across R and dplyr/tidyverse/ggplot2 back in college while learning probability and stats. They were boring drudgery to study because I wasn't aware of R to play around with data.
Well, better late than never I guess.
gnuly
R was the first thing we had in our syllabus for (shallow) machine learning.
The ease of doing `model <- lm(speed ~ dist, cars)` and then `predict(model, data.frame(dist = c(42)))` is unparalleled.
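For anyone following along, that two-liner runs as-is against the built-in `cars` dataset:

    # fit a simple linear model on the built-in cars dataset
    model <- lm(speed ~ dist, data = cars)
    # predict speed for a stopping distance of 42
    predict(model, data.frame(dist = c(42)))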
kasperset
I love R and dplyr. It is very readable and easy to explain to non-programmers. I use it almost every day. Not exactly on topic, but I am having difficulties debugging it. Maybe I need to brush up on debugging in R. Not sure if there is an easy way to add a breakpoint when using VS Code.
itsmevictor
Have you checked this extension? https://marketplace.visualstudio.com/items?itemName=RDebugge...
JackeJR
browser() ?
disgruntledphd2
trace() subsumes browser(); it's much more flexible and can be applied to library code without editing it.
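A minimal sketch of that pattern, pausing whenever a package function is entered, without editing the package:

    # attach browser() to dplyr's filter() inside its namespace
    trace("filter", tracer = browser, where = asNamespace("dplyr"))
    # ...any code calling dplyr::filter() now drops into the interactive browser
    untrace("filter", where = asNamespace("dplyr"))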
joshdavham
Totally agreed that R is underrated. I'm sad that I stopped using it after graduation.
fithisux
Life saver. I do not use the raw dataframe API; it's inconsistent and error prone.
wwweston
What's the story for integrating R code into larger software systems (say, a SaaS product)?
I'm sure part of Python's success is sheer mindshare momentum from being a common computing denominator, but I'd guess the integration story accounts for part of the margin. Your back end may well already be in Python or have interop with it, reducing stack investment and systems tax.
vhhn
There are so many options to embed R in any kind of system. Thanks to the C API, there are connectors for any of the traditional languages. There are also Rserve and plumber for inter-process interaction. Managing dependencies is also super easy.
My employer is using R to crunch numbers embedded in a large system based on microservices.
The only thing to keep in mind is that most people writing R are not programmers by trade so it is good to have one person on the project who can refactor their code from time to time.
dajtxx
I am working on a system at present where the data scientist has done the calculations in an R script. We agreed upon an input data.frame and an output csv as our 'interface'.
I added the SQL query to the top of the R script to generate the input data.frame and my Python code reads the output CSV to do subsequent processing and storage into Django models.
I use a subprocess running Rscript to run the script.
It's not elegant but it is simple. This part of the system only has to run daily so efficiency isn't a big deal.
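A sketch of that shape, with hypothetical table and function names standing in for the real ones; the R script owns the query and the CSV is the contract:

    # analysis.R -- invoked from Python via: subprocess.run(["Rscript", "analysis.R"])
    library(DBI)

    # hypothetical connection and query producing the agreed-upon input data.frame
    con <- dbConnect(RPostgres::Postgres(), dbname = "metrics")
    input <- dbGetQuery(con, "SELECT station_id, reading, taken_at FROM readings")

    result <- run_calculations(input)  # the data scientist's logic (hypothetical name)

    # the agreed-upon output interface: the Python side reads this CSV afterwards
    write.csv(result, "output.csv", row.names = FALSE)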
shoemakersteve
Any reason you're using CSV instead of parquet?
wodenokoto
It's getting a lot better, but 10 years ago R in production was something companies would preface with "so, we figured out a way".
The problem is pinning dependencies. So while an R analysis written using base R 20 or 30 years ago works fine, something using dplyr is probably really difficult to get up and running.
At my old work we took a copy of CRAN when we started a new project and added dependencies from that snapshot.
So instead of asking for dplyr version x.y, as you'd do ... anywhere, we added dplyr as it and its dependencies were stored on CRAN on that specific date.
We also did a lot of systems programming in R, which I thought of as weird, but for the exact same reason as you are saying for Python.
But R is really easy to install, so I don't see why you can't set up a step in your pipeline that does R, or even both R and Python. They can read dataframes from each other's memory.
mrbananagrabber
renv and rocker have really addressed these issues for using R in production
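For anyone who hasn't seen it, a minimal renv sketch; three calls pin and restore exact package versions via a lockfile:

    # in the project directory
    renv::init()      # create a private project library and renv.lock
    # ...install/upgrade packages as usual, then:
    renv::snapshot()  # record the exact versions in renv.lock
    # later, on another machine or in CI:
    renv::restore()   # reinstall exactly what the lockfile records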
kerkeslager
This is, I think, the main reason R has lost a lot of market share to Pandas. As far as I know, there's no way to write even a rudimentary web interface (for example) in R, and if there is, I think the language doesn't suit the task very well. Pandas might be less ergonomic for statistical tasks, but when you want to do anything with the statistical results, you've got the entire Python ecosystem at your fingertips. I'd love to see some way of embedding R in Python (or some other language).
notagoodidea
There are a lot of ways, and the most common is Shiny (https://shiny.posit.co/), albeit with a bias towards data apps. That R has no Django-like framework or the other web stacks Python has says more about the users of R than about the language per se. Its background was to replace S, a proprietary statistics language, not to compete with Perl in CGI and the early web. R is very powerful and is Lisp in disguise, coupled with the same kind of infrastructure that lets you use C under the hood, as Python does for most libraries/packages.
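For scale, a complete Shiny app is only a few lines; a minimal sketch:

    library(shiny)

    ui <- fluidPage(
      sliderInput("n", "Sample size", min = 10, max = 1000, value = 100),
      plotOutput("hist")
    )

    server <- function(input, output) {
      output$hist <- renderPlot(hist(rnorm(input$n)))
    }

    shinyApp(ui, server)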
djhn
Plumber is a mature package for building an API in R.
For capital-P Production use I would still rewrite it in Rust (polars) or Go (stats). But that's only if it's essential to either achieve high throughput with concurrency or measure performance in nanoseconds vs microseconds.
thangalin
Tangentially, R can help produce living Markdown documents (.Rmd files). A couple of ways include pandoc with knitr[0] or my FOSS text editor, KeenWrite[1]. I've kept the R syntax in KeenWrite compatible with knitr. Living documents as part of a build process can produce PDFs that are always up-to-date with respect to external data sources[2], which includes source code.
[2]: https://youtu.be/XSbTF3E5p7Q?list=PLB-WIt1cZYLm1MMx2FBG9KWzP...
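The build-step version of that idea, assuming rmarkdown and a LaTeX toolchain are installed, is a one-liner that re-renders the PDF from whatever the data sources currently contain:

    Rscript -e 'rmarkdown::render("report.Rmd", output_format = "pdf_document")'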
haberman
There is also Quarto, which I have had a good experience with: https://quarto.org/
countrymile
R is beautiful for writing data-rich books and websites. I started with rmarkdown, but I believe most of the new development is now in Quarto?
malshe
Yes, that's correct. Quarto is language agnostic and Posit has chosen that route over just being an R shop.
shepherdjerred
I'm more excited about https://typst.app/
Onawa
Quarto can output to Typst (as well as many other formats simultaneously, e.g. .docx, HTML, PDF, PPT) for its typesetting capabilities. https://quarto.org/docs/output-formats/typst.html
juujian
Last time I was working on something complex, I was able to knit from Rmd to md, and then use my usual pandoc defaults, which was quite neat. Big recommendation on that workflow.
thangalin
My typesetting Markdown series explores weaving knitr and pandoc together:
https://dave.autonoma.ca/blog/2019/07/11/typesetting-markdow...
However, most workflows and nearly all editors don't support interpolated variables. To address this, first I developed a YAML preprocessor:
https://repo.autonoma.ca/yamlp.git
Then I grew tired of editing YAML files, piping files together, and maintaining bash scripts. So next, I developed KeenWrite to allow use of interpolated variables directly within documents from a single program. The screenshots show how it works:
uptownfunk
I will say, after 15 years of messing with this: with LLMs I just do it all in Python. But I still miss the elegance and simplicity of R for data manipulation and analysis, especially the dplyr semantics. They really nailed it. I think they got crushed by the namespace/import system. There's something about R that makes you so fluid and intuitive. But the engineering and the efficiency I get with Python now; I can't go back.
tylermw
Funny you mention namespacing: R 4.5.0 was just released today with the new `use()` function, which allows you to import just what you need instead of clobbering your global namespace, equivalent to Python's `from x import y` syntax.
E.g., to avoid dplyr overriding stats::filter:

    use("dplyr", c("mutate", "summarize"))
kgwgk
The release notes say:
(Actually already available since R 4.4.0.)
dkga
I agree with all of your comment… except the very last bit. Do you really find Python to be more efficient for engineering stuff than R? Especially speed, which in my experience at least is broadly the same, if not faster with R, because it integrates more easily with Rust and C++?
claytonjy
Not OP, but I think Python is very far above R for engineering stuff. I built my early career on R and ran R user groups. R is great for one-off analyses, or low-volume controlled repetition like running the same report with new inputs.
For engineering stuff I want strong static analysis (type hints, pydantic, mypy), observability (logfire, structlog), and support (can I upload a package to my cloud package registry?).
For ML stuff, I want the libraries everyone else uses (pytorch, huggingface) because popularity brings a lot of development, documentation, and obscure GitHub issues that the R clones lack.
Userbase matters. In R, hardly any users are doing any engineering; most R code only needs to run successfully one time. The ecosystem reflects that. The Python-based ML world has the same problem, but the broader sea of Python engineers helps counterbalance it.
uptownfunk
On further reflection, I think the sweet spot for R for me has always been prototyping and exploration: where you don't exactly know what the logic needs to be, or how the data needs to be cut to get at what you want. R is really, really good at that kind of rapid exploration. It's closer to math for me than software engineering, and if I had a job where I could just do that all day I'd be pretty happy at this point in my life. It's for when you can't use a pivot table in Google Sheets or Excel to get at the cut you want, or the logic is too complex to do in Google Sheets. For that sweet spot, which is still a broad niche, R is excellent and shines.
uptownfunk
Everything I need can get done in Python, so I don't even need to deal with Rust and C++. Adding language interop between R and C++ is just another thing on my plate, so I stick to Python and pay the cost of less elegant code for data manipulation, which I am okay with because now I just need to read it, not write it.
There's a ton more Python code out there, so LLM reliability on Python code just makes my life easier. R was great and still is, but my world is now more than just data eng, model fitting, and viz. I have to deal with operationalizing and working with people who aren't data scientists, and most orgs don't have the luxury of an easy production R system. So I can get my Python code over the line and trust a good engineer will be okay meshing it into the production stack, which is likely heavy Python (instead of saying "oh, we don't work with R, we do Python/Java, so it will take 3-5x longer").
Another sad truth is the cool ML kids all want to do pytorch deep ML training / post-training / rlhf / ppo / gdpr gtfo, so you are not real hardcore ML if you only do R. I know it's stupid, but the world is kind of like that.
You want to hire people who want to build their careers on the cool stack. I know it's not the cool stuff the hackers here play with, but for real-world applications I have a lot of other considerations.
gsf_emergency_2
Any Julians care to comment?
I've seen Julia proposed as the nemesis of R (not Python: too political, non-lispy).
>the creator of the R programming language, Ross Ihaka, who provided benchmarks demonstrating that Lisp’s optional type declaration and machine-code compiler allow for code that is 380 times faster than R and 150 times faster than Python
(Would especially love an overview of the controversies in graphics/rendering)
Hasnep
In my opinion, Julia has the best alternative to dplyr in its DataFrames.jl package [1]. The syntax is slightly more verbose than dplyr because it's more explicit, but in exchange you get data transformations that you can leave for 6 months, and when you come back you can read and understand them very quickly. When I used R, if I hadn't commented a pipeline properly I would have to focus for a few minutes to understand it.
In terms of performance, DF.jl seems to outperform dplyr in benchmarks, but for day-to-day use I haven't noticed much difference since switching to Julia.
There are also APIs built on top of DF.jl, but I prefer using the functions directly. The most promising seems to be Tidier.jl [2] which is a recreation of the Tidyverse in Julia.
In Python, Pandas is still the leader, but its API is a mess. I think most data scientists haven't used R, and so they don't know what they're missing out on. There was the Redframes project [3] to give Pandas a dplyr-esque API which I liked, but it's not being actively developed. I hope Polars can keep making progress in replacing Pandas, but it's still not quite as good as dplyr or even DF.jl.
For plotting, Julia's time to first plot has gotten a lot better in recent versions; from memory it's gone from something like 20 seconds a few years ago down to 3 seconds now. It'll never be as fast as matplotlib, but if you leave your terminal window open you only pay that price once.
I actually think the best thing to come out of Julia recently is AlgebraOfGraphics.jl [4]. To me it's genuinely the biggest improvement to plotting since ggplot which is a high bar. It takes the ggplot concept of layers applied with the + operator and turns it into an equation, where + adds a layer on top of another, and the * operator has the distributive property, so you can write an expression like data * (layer_1 + layer_2) to visualise the same data with two visualisations. It's very powerful, but because it re-uses concepts from maths that you're already familiar with, it doesn't take a lot of brain space compared to other packages I've used.
[1] https://dataframes.juliadata.org/ [2] https://github.com/TidierOrg/Tidier.jl [3] https://github.com/maxhumber/redframes [4] https://aog.makie.org/
staplung
Thanks for the links. FWIW, the link for [4] (AoG) is currently 404'd, which is amusing because the site is still up; they just seem to have deleted their own top-level index.html file. Anyway, this works:
CreRecombinase
The comment you linked is a response to my comment, where I tried (and failed) to articulate the world in which R is situated. I finally RTFA, and the benchmark I think perfectly demonstrates why conversations about R tend not to be very productive. The benchmark is of a hypothetical "sum" function. In R, if you pass a vector of numbers to the sum function, it will call a C function. That's it. In R, when you want to do lispy tricky metaprogramming stuff, you do that in R; when you want stuff to go fast, you write C/C++/Rust extensions. These extensions are easy to write in a really performant way because R objects are often thinly wrapped contiguous arrays. In other programming language communities, the existence of library code written in another language is seen as some kind of sign of failure. R programmers just do not see the world that way.
fithisux
Julia is what I mostly use. I used R in the past, but I was constantly puzzled by the documentation; it did not work for me. Sometimes I fire up the REPL for some interpolation, but I limit myself to what I understand.
BTW, I am a senior Java/Python developer.
barrenko
For data analysis and visualization R is the lightsaber.
vharuck
I also like this fun though dated handbook, full of gotchas common among new R programmers:
fn-mote
Dated is right.
The invention of the Tidyverse freed new R programmers from 126 pages of gotchas.
Tell them to learn to use the tidyverse instead. For most of them, that will be all they ever need.
wpollock
Very nice, but instead of an owl, shouldn't the cover illustration be a pirate?
DadBase
Totally agree. R is pure pirate energy. Half the functions are hidden on purpose, the other half only work if you chant the right incantation while facing the CRAN mirror at dawn.
MrLeap
If you started with SAS for statistics like I did, you'd see how absolutely civilized R is in comparison.
kylebenzle
Yes, but today I find little to no benefit over Python.
account-5
I've never used R before, why would functions be hidden on purpose? Sounds like a recipe for frustration.
wdkrnls
Computer scientists had this idea that some things should be public and some things private. Java takes this to the nth degree with its public and private keywords. R just forces you to know the lib:::priv_fun versus lib::pub_fun trick. At best, it's a signal telling package end users which functions they can rely on to have stable interfaces and which they can't. Unfortunately, with R's heavy use of generics, it gets confusing for unwary users how developers work with the feature, as some methods (e.g. the different ways to summarize various kinds of standard data sets, as you get with the summary generic or even the print generic) get exported and some don't, with seemingly no rhyme or reason.
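A minimal sketch of the distinction, with a hypothetical package and function names:

    # exported: part of the public interface, reachable with ::
    mypkg::pub_fun(x)

    # unexported internal: reachable with :::, but no stability promise
    mypkg:::priv_fun(x)

    # generics complicate this: summary(obj) dispatches to a method like
    # summary.myclass, which the package may or may not have exported
    summary(obj)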
Hasnep
Don't worry, they're just a bot. R doesn't hide functions.
hcarvalhoalves
YaRrr! The Pirate’s Guide to R
madcaptenor
Sadly, the R community has never really embraced the pirate thing.
esafak
Statisticians don't really embody the pirate spirit, do they :)
bryanrasmussen
The average Statistician doesn't, but the mean ones do.
madcaptenor
I've made some half-hearted attempts to build something like this and I'm glad to see someone tried harder than I did. Thanks!
One comment: it would be good to distinguish between books that are free and books that you have to pay for.
kingkongjaffa
What is the best way to integrate some R code with a Python backend?
I've been tempted to port to Python, but some of the stats libraries have no good counterparts, so is there an ergonomic way to do this?
malshe
One of my students codes exclusively in Python, but in most cases newer econometrics methods are implemented in R first. So he just uses rpy2 to call R from his Python code, and it works great. For example, he recently performed Bayesian synthetic control using the R code shared by the authors. It required the Stan backend, but everything worked.
jjr8
There is also https://www.rplumber.io/, which lets you turn R functions into REST APIs. Calling R from Python this way will not be as flexible as using rpy2, but it keeps R in its own process, which can be advantageous if you have certain concerns relating to threading or stability. Also, if you're running on Windows, rpy2 is not officially supported and can be hard to get working.
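For flavor, a minimal plumber sketch along the lines of its documentation; roxygen-style annotations turn ordinary R functions into HTTP routes:

    # api.R
    library(plumber)

    #* Echo back a message
    #* @param msg The message to echo
    #* @get /echo
    function(msg = "") {
      list(msg = paste0("The message is: '", msg, "'"))
    }

    # serve it with: plumber::plumb("api.R")$run(port = 8000)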
bachmeier
Not sure what you mean by "python backend". If you mean calling R from Python, rpy2 mentioned in the other comment works well. If you mean the other direction, RStudio has this all built in. This is probably the best place to start: https://rstudio.github.io/reticulate/articles/calling_python...
jmalicki
Do you dislike rpy? I've found it to be pretty easy to use.
huijzer
CSV is generally the answer, unless you need superb performance, which generally is not the case.
ebri
I worked for 8 years with R's data.table package in research, and now that I've moved to the private sector I have to use Python and pandas. Pandas is so terrible compared to data.table it defies belief. Even the tidyverse is better than pandas, which is saying something. I miss it so much.
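For anyone who hasn't used it, data.table's DT[i, j, by] idiom packs filtering, computing, and grouping into one bracket; a small sketch on the built-in mtcars data:

    library(data.table)
    dt <- as.data.table(mtcars)

    # filter rows (i), compute aggregates (j), group (by) in a single call
    dt[mpg > 20, .(avg_hp = mean(hp), n = .N), by = cyl]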
fhsm
Use it every single day. Absolutely fantastic tool.
hughess
This is great - I used to use R all the time when I worked in finance and wish I had this resource back then!
R and RMarkdown were big inspirations for what we're building at evidence.dev now, so very grateful to everyone involved in the R community
loa_observer
I hope GWalkR can be added to this book; it's one of the more interesting updates for visualization in R in recent years.
repo: https://github.com/Kanaries/GWalkR site: https://kanaries.net/gwalkr
LostMyLogin
Not to be confused with The Book of R: https://www.amazon.com/Book-First-Course-Programming-Statist...
R, especially dplyr/tidyverse, is so underrated. Working in ML engineering, I see a lot of my coworkers suffering through pandas (or occasionally polars, or even base Python without dataframes) to do basic analytics or debugging; it takes eons and gets complex so quickly that only the most rudimentary checks get done. Anyone doing data-adjacent engineering work would benefit from having R/dplyr in their toolkit.