Gemini Robotics
587 comments
· March 12, 2025
decimalenough
I always thought that Asimov's Laws of Robotics ("A robot may not injure a human being" etc) were an interesting prop for science fiction, but wildly disconnected from the way computing & robotics actually work.
Turns out he was just writing LLM prompts way ahead of his time.
alternatex
Not only wildly disconnected, but purposefully created to show ambiguity of rules when interpreted by beings without empathy. All of Asimov's books that include the laws also include them being unintentionally broken through some edge-case.
generalizations
It was weird to actually read I, Robot and discover that the entire book is a collection of short stories about those laws going wrong. Far as I know, Asimov never actually told a story where those laws were a good thing.
rcxdude
They aren't generally portrayed as bad, either, just as things which are not as simple as they first appear. Even in the story where the AIs basically run the economy and some humans figure out that they are surreptitiously suppressing opposition to this arrangement (with the hypothesized emergent zeroth law of not allowing humanity to come to harm), Asimov doesn't really seem to believe that this is entirely a bad thing.
amarant
The Foundation series is arguably that, but you only find out in book 14 or so.
theoreticalmal
The 0th law worked out pretty good for Daniel and humanity
mystified5016
Well, "everything worked out according to plan and nobody got hurt" doesn't make for a very interesting story ;)
DrScientist
And obviously all these stories have already been fed into the machine.... :-)
echoangle
> show ambiguity of rules when interpreted by beings without empathy
I don’t think that’s the main problem, there are a lot of moral dilemmas where even humans can’t agree what’s right.
nthingtohide
Humans not agreeing has more to do with the fact that humans are called upon to take decisions with imperfect information under time constraints.
If each human could pause the state of the world and gather all information and then decide, they would act humanely
yreg
Well it's quite difficult to come up with much better rules than Asimov's.
HPMOR offers a solution called 'coherent extrapolated volition' – ordering the super intelligent machine to not obey the stated rules to the letter, but to act in the spirit of the rules instead. Figure out what the authors of the rules would have wished for, even though they failed to put it in writing.
We are debating scifi, of course.
fc417fc802
> Figure out what the authors of the rules would have wished for
What if the original author was from long ago and doesn't share modern sensibilities? Of course you can compensate when formulating them to some extent, but I imagine there will always be potential issues.
taneq
Exactly! That was kind of the point IMO, that human morality was deeply complex and ‘the right thing’ couldn’t be expressed with some trite high level directives.
nthingtohide
All of fiction is a distortion of sorts. Consider the fat people in the WALL-E movie. The AI advancements shown in the movie should transitively imply that biotech and biomedical progress would be so advanced that we would have achieved perfect health by then.
root_axis
Not really. As history shows, progress in one field of science/engineering/philosophy doesn't necessarily imply progress in others.
rcxdude
More just that the rules are actually a summary of a very complex set of behaviours, and that those behaviours can interact with each other and unusual situations in unexpected ways.
krapp
It's funny because Isaac Asimov would have come up with some convoluted logical puzzle to justify why the robot went on a murderous rampage - because in sci-fi, robots and AI are hyperrational and perfectly logical - when in real life you'd just have to explain that your dying grandmother's last wish was to kill all the humans, because a real AI is essentially a dementia-riddled child created from the Lovecraftian pool of chaos and madness that is the internet.
I recall that story of the guy who tried to use AI to recreate his dead friend as a microwave and it tried to kill him[0].
You couldn't sell a sci-fi story where AIs just randomly go insane sometimes and everyone just accepts it as a cost of doing business, and because "humans are worse," but that's essentially reality. At least not as anything but a dark satire that people would accuse of being a bit much.
[0]https://thenextweb.com/news/ai-ressurects-imaginary-friend-a...
fwipsy
Worth noting is that the article is from April 2022 and used gpt-3. The "friend" was an imaginary friend, not a dead friend, and so probably more prone to taking actions which would appear in a fictional context. From my research it looks like the base gpt-3 model was just a text predictor without any RLHF or training to be helpful/harmless.
Certainly AI safety isn't perfect, but if you're going to criticize it at least criticize the AIs people actually use today. It's like arguing cars are unsafe and pointing to an old model without seatbelts.
It's not surprising at all that people are willing to use AIs even if they give dangerous answers sometimes, because they are useful. Surely they're less dangerous than cars or power tools or guns, and all of those have legitimate uses which make them worth the risk (depending on your risk tolerance.)
krapp
Fair enough but it's still wild that the template for AI and robots in science fiction has (usually) been to portray them as hyper-rational, competent and logical even to a fault, but the most accurate prediction of what AI and robots turn out to be is probably Star Wars, not anything by Asimov.
paulryanrogers
AIs that eventually go insane is a sci-fi trope that appeared in the Halo (2, 3, Reach?) videogames.
krapp
And 2001, and I Robot and the Matrix and Star Trek. I should have let that comment percolate a bit.
BWStearns
It was a big miss calling them "prompt engineers" and not robopsychologists.
advisedwang
Everyone wants engineer in their title, it adds at least $150k/year
pjerem
Oh! You are right! I always thought the same.
And now I wouldn’t even trust them to understand the laws 100% of the time.
diwank
Same. I guess in so many ways, he was remarkably prescient. Anthropic’s Constitutional AI approach is pretty much a living example
devit
The pioneer of AI alignment.
lfsh
I use CNC machines and know how powerful stepper and servo motors are. You can ask yourself what will happen if your motor driver is controlled by an AI hallucination...
truculent
If you want software to exhibit human values, the development process probably looks more like education or parenting than prompting.
Or so says Ted Chiang: https://en.m.wikipedia.org/wiki/The_Lifecycle_of_Software_Ob...
cjmcqueen
If this makes it easier and faster to sort garbage, we could probably improve the efficiency of recycling 100x. I know there are some places that do that already, but there are so many menial tasks that could be done by robots to improve the world.
decimalenough
There are plenty of places [1] where garbage is sorted for free by poor people who scrape a living from recycling it.
Sorting garbage is a terrible job for humans, but it's a terrible one for robots too. Those fancy mechanical actuators etc are not going to stand up well to garbage that's regularly saturated with liquids, oil, grease, vomit, feces, dead animals, etc.
[1] https://loe.org/shows/segments.html?programID=96-P13-00022&s...
tkzed49
are you implying that society shouldn't aim to reduce human interaction with vomit, feces, and dead animals? Robotics in harsh environments isn't unheard of
ggm
I think they're pointing out you need to be cautious assuming a robot can be economically, sustainably deployed to do jobs in environments which are challenging for electro-mechanical systems.
An example: A friend worked on accurate built-in weighing machines for trucks, which could measure axle weight and load balance to meet compliance for bridges and other purposes. He found it almost impossible to make units which could withstand the torrents of chemical and biological wet materials which regularly leak into a truck. You would think "potting" electronics solves this problem, but even that turns out to have severe limits. It's just hard to find materials which function well when subjected to a range of chemicals. Stuff which is flexible is especially prone to risk here: the way you make things flex is to use softeners, which in turn give the material other properties, like porosity or susceptibility to attack by some combinations of acid and alkali.
These units had NO MOVING PARTS because they were force transducers. They still routinely failed in service.
Rubbish includes bleaches, acids, complex organics, grease, petrochemicals, waxes, catalyst materials, electricity, reactive surfaces, abrasives, sharp edges..
They are not saying "dont try" they are saying "don't be surprised if it doesn't work at scale, over time"
pjerem
> human interaction with vomit, feces, and dead animals
Humans can generally stand this without an issue.
In fact you wouldn't replace a lot of jobs that involve this: doctors, nurses, emergency workers, caregivers…
It just happens to be difficult. But people love doing difficult things as long as the work is: a) rewarding, b) respected, and c) sufficiently paid.
dyauspitr
I think it’s pretty straightforward to cover the entire torso of the robot with a plastic covering.
genewitch
Why does it even need to be that type of robot, a conveyor that has items on it, but its a mesh, a camera looks, and if something can be sorted just use compressed air to move it to a collection area/bin. Put an electromagnet at the start of the conveyor that can move on a gantry to another bin.
Why's everything gotta have arms and graspers it's so inefficient.
Robots aren't climbing trees or chasing food. They don't need tails, either.
xyst
Have seen demos where garbage sorting has been automated. No AI necessary.
Just had cameras, visual detection, some compressed air nozzles, and millisecond (nanosecond?) reaction time to separate the non-recyclable materials.
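The millisecond figure matters more than it might seem: the control-loop latency bounds how precisely an air nozzle can be fired at an item moving past on the belt. A back-of-envelope sketch (the belt speed and latency numbers below are illustrative assumptions, not from any real system):

```python
# Distance an item travels on the belt during one sense-decide-actuate cycle.
# All numbers here are made-up assumptions for illustration.
def travel_mm(belt_speed_m_s: float, latency_s: float) -> float:
    """Distance travelled (in mm) during one control-loop latency."""
    return belt_speed_m_s * latency_s * 1000.0

# At an assumed 3 m/s belt speed, a 1 ms loop lets the item drift only 3 mm
# before the nozzle fires, while a 100 ms heavyweight-model loop would let it
# drift 300 mm - far past the nozzle.
```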
omneity
It's funny that we are at a point where "visual detection" is not considered AI anymore.
thrdbndndn
Some (most?) of these aren't really AI-based at all. For example, traditional optical sorters typically rely on the reflectivity of materials at one or a few laser wavelengths directed onto the material.
The mapping between sensor signals and material types is usually hardcoded from laboratory test results.
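That hardcoded mapping can be sketched as a nearest-signature lookup. This is a hypothetical illustration: the wavelengths, reflectance values, and threshold below are invented, not taken from any real sorter's calibration tables.

```python
# Hypothetical lab-calibrated reference signatures: reflectance at a few
# laser wavelengths (say 980 nm, 1450 nm, 1650 nm). Values are made up.
REFERENCE_SIGNATURES = {
    "PET":   (0.62, 0.18, 0.35),
    "HDPE":  (0.70, 0.30, 0.55),
    "paper": (0.80, 0.45, 0.60),
    "glass": (0.10, 0.05, 0.08),
}

def classify(sample, threshold=0.15):
    """Return the nearest reference material, or None if nothing is close enough."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best = min(REFERENCE_SIGNATURES, key=lambda m: dist(sample, REFERENCE_SIGNATURES[m]))
    return best if dist(sample, REFERENCE_SIGNATURES[best]) <= threshold else None
```

No learning involved: the decision boundary is entirely fixed by the lab measurements baked into the table.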
devmor
Using AI for image recognition is to visual detection as Orange Juice is to beverages.
LargoLasskhyfv
About 199x, Dortmund, Germany...
Led to nothing. At least not at the time. AFAIK the initial garbage stream is still manually inspected and separated at most sites.
And the people doing that have a much higher risk of getting sick, because of all sorts of bacteria, mold, spores, chemicals, VOC, whatever.
Not to mention the stink.
genewitch
Haha I just came up with that off the hip (never heard of, seen, or even contemplated sorting garbage before) because the idea that this needs articulation and graspers is the height of "we're VC funded and don't care about anything except runway". Laughable.
hakaneskici
WALL-E would get lots of funding as a robot entrepreneur at the YC demo day today ;)
bamboozled
I don't think the issue with recyling is just sorting? Plenty of sorted garbage has gone unrecycled.
shw1n
I helped a friend of mine’s company (CleanRobotics) service his trashbots that sorted landfill/recycling in shopping malls
They used AI to identify and sort
One issue was just the sheer muck of trash, if someone dropped an open smoothie, all sorts of sensors got covered, etc
Really cool idea I thought though
recycledmatt
Folks in the industry are certainly thinking about this. The economic forces at play could be huge.
dchristian
Check out: https://ampsortation.com
recycledmatt
The nuanced answer to this is they have a first-mover advantage and make a great robot. The point of the thread is that new development is much cheaper for folks to figure it out. Recyclers are the most entrepreneurial people you will ever meet. We'll figure out some good uses for this stuff when it gets cheaper.
recycledmatt
Super familiar. Thanks!
stefan_
If you can recognize what garbage to yeet, you can already yeet it today. You don't need a terribly slow robot arm to do it.
appleorchard46
Yeah, maybe someone with more industry knowledge can give a better picture, but I have a hard time seeing how these robots would fit into and improve existing processes [0]. Garbage is mechanically sorted most of the way already; then IR is used to identify different plastics and air blasts are used to separate them out at dozens per second.
The Gemini robot tech is cool as heck, don't get me wrong, but it doesn't seem particularly well suited to industrial automation.
ghostly_s
The problem with recycling is not sorting, it's that plastic being recyclable is a myth.[1]
1. https://www.pbs.org/wgbh/frontline/documentary/plastic-wars/
mclau156
I don't see why a Gemini robot couldn't grab 20 dark clothing items from a hamper, put them in the washing machine, wait an hour, then take them out and put them in the dryer while I was at work (thanks, return to office).
lallysingh
Who's "you" here? The person at home, an employee at a recycling center, or garbage dump?
stefan_
The vision models already filtering recycling today? And in a million other industrial processes?
thatsallfolkss
reminds me of this rust conf talk: https://m.youtube.com/watch?v=TWTDPilQ8q0
piokoch
I doubt anyone would use this kind of fancy machine for garbage handling until they become a commodity. I would bet that the first application will be to send these robots to trenches and foxholes...
XorNot
Ground based robotics to fight wars is an expensive way to not do what an aerial drone can.
You can just send explosives into both those things, and it's cheaper and more effective.
daemonologist
There's one shot that stood out to me, right at the end of the main video, where the robot puts a round belt on a pulley: https://youtu.be/4MvGnmmP3c0?si=f9dOIbgq58EUz-PW&t=163 . Of course there are probably many examples of this exact action in its training data, but it felt very intuitive in a way the shirt-folding and object-sorting tasks in these demos usually don't.
(Also there seems to be some kind of video auto-play/pause/scroll thing going on with the page? Whatever it is, it's broken.)
05
It felt extra fake: the cherry-picked people lacking rudimentary mechanical skills, the ~$50K set of Franka Emika arms used instead of their default 'budget' ALOHA 2 grippers, the sheer luck that helped the robots put the belt on instead of removing it from the pulley.
The trick was that the belt was too tight for an average human to put on with brute force, and disabling the tensioner or using tricks would require better-than-average mechanical skills their specially chosen 'random humans' lacked.
CamperBob2
Yeah, they went WAY over the top when they told the human to "make it look hard." A significant distraction from how impressive the robot actually is.
namaria
All while the robot video was at 3x speed to even keep up with the human
daveguy
I slowed it down to 1/4 speed to check -- the autonomous video is sped up 3x, but the human video seems to be 1x. I say that because generally no one moves that slowly for a physical task, not just in the "problem solving" aspect, but also in the "getting a belt to the gears" aspect. So, it appears that the robot did a better job than the human, but I believe the human only spent 1/3 of the time in the clip. After stretching the belt, it was probably put on easily, and likely the human still completed the task in 2/3 of the time of the robot.
Reference video (saw your clip is robot-only, but the robot vs human video is more telling):
fuzzythinker
Earlier in the video, where it was going to fold a "fox", I was expecting a fox, but got only a fox face. I know I should have high expectations at this point, but I was disappointed by the result given the prompt.
GolfPopper
Does no one remember the last Google Gemini super-impressive demo that blew everyone away was faked?
https://techcrunch.com/2023/12/07/googles-best-gemini-demo-w...
teaearlgraycold
I view these demos with a heaping cup of salt.
ipv6ipv4
There was also the AI that would handle restaurant reservations over the phone.
AtomBalm
Don’t Be Evil
… but don’t disappoint shareholders…
metayrnc
I am not sure whether the videos are representative of real life performance or it is a marketing stunt but sure looks impressive. Reminds of the robot arm in Iron Man 1.
ksynwa
AI demos and even live presentations have exacerbated my trust issues. The tech has great uses but there is no modesty from the proprietors.
Miraste
Google in particular has had some egregiously fake AI demos in the past.
throwaway314155
> Reminds of the robot arm in Iron Man 1.
It's an impressive demo but perhaps you are misremembering Jarvis from Iron Man which is not only far faster but is effectively a full AGI system even at that point.
Sorry if this feels pedantic, perhaps it is. But it seems like an analogy that invites pedantry from fans of that movie.
Philpax
The robot arms in the movie are implied to have their own AIs driving them; Tony speaks to the malfunctioning one directly several times throughout the movie.
Jarvis is AGI, yes, but is not what's being referred to here.
throwaway314155
Ah good point!
whereismyacc
i thought it was really cool when it picked up the grapes by the vine
edit: it didn't.
yorwba
Here it looks like it's squeezing a grape instead: https://www.youtube.com/watch?v=HyQs2OAIf-I&t=43s Bit hard to tell whether it remained intact.
flutas
The leaf on the darker grapes looks like a fabric leaf, I'd kinda bet they're all fake for these demos / testing.
Don't need the robot to smash a grape when we can use a fake grape that won't smash.
whereismyacc
welp i guess i should get my sight checked
saberience
[flagged]
nomel
This is, nearly exactly, like saying you've seen screens slowly display text before, so you're not impressed with LLMs.
How it's doing it is the impressive part.
asadm
the difference is the dynamic nature of things here.
Current arms and their workspaces are calibrated to mm. Here it's more messy.
Older algorithms are more brittle than having a model do it.
KoolKat23
For the most part that's been on known objects, these are objects it has not seen.
mkagenius
Not specifically trained on but most likely the Vision models have seen it. Vision models like Gemini flash/pro are already good at vision tasks on phones[1] - like clicking on UI elements and scrolling to find stuff etc. The planning of what steps to perform is also quite good with Pro model (slightly worse than GPT 4o in my opinion)
1. A framework to control your phone using Gemini - https://github.com/BandarLabs/clickclickclick
jwblackwell
The upshot of this is that anyone will be able to order a couple of robot arms from China and then set them up in a garage, programming them with just text, like we do with LLMs now.
Time to think bigger.
muzani
"Time to think bigger."
I want to strap robot arms to paralyzed people so they could walk around, pick up stuff, and climb buildings with them.
ethan_smith
Climb buildings? ಠ_ಠ
opwieurposiu
Hopefully they invent some kind of sticky gripper instead of just smashing all the windows like Doctor Octopus.
muzani
Yes, sadly, not many places are wheelchair friendly.
ur-whale
> Climb buildings? ಠ_ಠ
Doc Oc style.
mannycalavera42
it's called revenge climbing :-)
numba888
you probably need robotic leg for walking. or better pony. but doing anything physically requires at least working torso.
ddalex
> programming them with just text
Isn't programming just text anyway ?
dinkumthinkum
I guess the question is where will they get the money to order those things?
jwblackwell
The cost of robotics is coming down; check out Unitree. A couple of robot arms would cost about the same as a minimum-wage worker for one year right now. But of course they can run virtually 24/7, so likely a third of the cost.
danans
Not the OP, but I think you might have missed their point, which I think was: if robots take away people's jobs, how will said people afford robots.
danavar
Or put a few 6 axis arms on a track that goes throughout a home and have an instant home assistants
sottol
> Time to think bigger.
Ehh, no need - just let the LLM figure out what to build in your garage.
calmbonsai
The issues with all of these robotic demo videos is "repeatability" and "noise tolerance".
Can these spatial reasoning and end-effector tasks be reliably repeated, or are we just looking at the robotic equivalent of "trick shots" where the success rate is in the single digits?
I'd say Okura and Vinci are the current leaders in multi-axis multi-arm end-effectors and they have nothing like this.
YeGoblynQueenne
No, it's the WYSIWYG model of robotics: the robot can do exactly what you see in the demo
e.g. the robot can put that particular fake banana in that particular bowl placed in that particular location. Give it another banana and another bowl and run for cover.
gatinsama
The problem with Google is that their ad business brings so much revenue that no other product makes sense. They will use whatever they learn with robots to raise their ad revenue, somehow.
echelon
Google uses their insane ad revenue to subsidize the Xerox Parc / Bell Labs of the current generation. Waymo, DeepMind, Gemini Robotics. They're killing it and leading the entire market.
It's not just researchers. Engineers at Google get to spin up products and throw spaghetti at walls, too. Google has more money than God to throw around.
Google's ad dominance will probably never go away unless antitrust action by the FTC/DOJ/EU forces a breakup. So they'll continue to lead as long as they can subsidize and outspend everyone else. Their investments compound and give an almost unassailable moat into deep tech problems.
Google might win AI and robotics and transportation and media and search 2.0. They'll own everything.
tsunamifury
Google has been looking for post-ad post-search revenue for almost a decade now. They certainly won't dominate forever and have several signals flashing red for a few years now.
orangecat
Google has been looking for post-ad post-search revenue for almost a decade now
With a reasonable degree of success. In their last quarter (see https://abc.xyz/investor/earnings/) 25% of their revenue was non-ads, and that percentage has been consistently increasing.
echelon
YouTube has bigger revenues than Netflix. While the majority of that revenue is from ads, they get it by providing immense value in the form of near-unlimited entertainment.
That's just one of their many business units.
riku_iki
> Google's ad dominance will probably never go away unless antitrust action by the FTC/DOJ/EU force a breakup.
chatgpt has good chance to kill google search -> kill google.
IX-103
If ChatGPT replaces Google search then it will effectively be signing its own death warrant.
ML models like ChatGPT rely on the open web for training data, particularly for information about recent events. But models like ChatGPT are horrible about linking to their sources. That means that sources that rely on ad revenue or even donations to exist will effectively disappear as ChatGPT steals all of the traffic. With no cash flow, the sites with current data disappear. With no new training data, ChatGPT stagnates.
ChatGPT is basically a parasite on the free and open web - taking content but not giving back.
kevinventullo
It's not just researchers. Engineers at Google get to spin up products and throw spaghetti at walls, too.
This might have been true 10-15 years ago. I assure you it is not the case today.
happyopossum
Yup, it still is. Maybe it was more prevalent or expected 15 years ago, but it still happens all the time today.
lallysingh
Gcloud is a running business, and AI is a billable service in it. There's a strong incentive to branch out from one line of business, especially as AI can replace regular Google search and the web browsing that shows Google ads.
Search is in real danger of mostly obsolescence. Ads aren't safe.
Powdering7082
Waymo seems to be a counter example here
randyrand
Waymo took 15 years and $30B to develop and is still unprofitable. By the time they make their money back it'll probably be too late.
robotresearcher
They’ll never make their money back. Autonomous driving is mostly software and will be commoditized very shortly after it works well.
There’s not enough money paid to drivers in the world today to repay the investment in autonomous driving from direct revenues. It’ll be an expected feature of most cars, and priced at epsilon.
Autonomous driving and the attendant safety improvements will turn out to be a gift to the world paid for by Google ad revenue, startup investors, and later, auto companies.
rglover
My bet is on transparent, contextual ads. Assuming the product from all of this is having a robot in your house, when you're doing something like cooking, it will say things like "have you considered trying an oat milk base? Oatly is a great option. I can Doordash some for you if you'd like..."
nick111631
It'll be simpler than that. For every 5 minutes of robot work you have to watch 30 seconds of unskippable video ads or else it quits working.
daveguy
Ugh... Please not the Alexa model of pushing products and services.
jcims
I worked there briefly. This was my enduring impression. Met some incredibly smart people, but so much of it was weird and seemingly pointless.
gerash
The problem with HN is people with little expertise in an area confidently claim things that are woefully wrong
toddmorey
That actually doesn’t counter the argument
bloomingkales
You don't think a walking, talking robotic salesman is a boon for their ad business?
whimsicalism
why do the people on this website have such obviously flawed world models
tim333
It's kind of like that on all websites, or worse.
greenchair
question for the robot experts: what is the limitation that makes the movements so slow? for example when it picks up the ball and puts it in the basket. why couldn't that movement be done much faster?
n_ary
From university, I vaguely recall that I had to implement a lot of feedback and correction calculations when working on industrial robotic arms. Usually too much speed causes overshooting (going on the wrong trajectory or past the target). The feedback is constantly adjusted until the target is reached, hence a lot of expensive computation and readjustment from all the sensor feeds. Additionally, faster movement risks damaging nearby objects when overshoot happens, and it also degrades the joints faster. For a simpler example, think about an elevator: what would happen if it were to go up/down very fast? How would you tweak your PID controller to handle super-fast movement without throwing your passengers around, while still correctly aligning and halting at the target floor?
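The speed/overshoot trade-off is easy to see in a toy simulation. This is a deliberately minimal sketch (a unit mass, no friction, Euler integration, made-up gains), nothing like real arm firmware, which runs per joint at kHz rates with sensor fusion:

```python
# One step of a discrete PID controller for a 1-D position target.
def pid_step(target, pos, state, kp, ki, kd, dt):
    integral, prev_err = state
    err = target - pos
    integral += err * dt
    deriv = (err - prev_err) / dt
    return kp * err + ki * integral + kd * deriv, (integral, err)

def peak_position(kp, kd=0.0, ki=0.0, steps=400, dt=0.01):
    """Drive a frictionless unit mass toward target=1.0; return max position reached."""
    pos, vel, peak = 0.0, 0.0, 0.0
    state = (0.0, 1.0)  # (integral, previous error)
    for _ in range(steps):
        force, state = pid_step(1.0, pos, state, kp, ki, kd, dt)
        vel += force * dt  # Euler integration, unit mass
        pos += vel * dt
        peak = max(peak, pos)
    return peak
```

With a high proportional gain and no damping, the mass charges past the target and oscillates (overshoot well beyond 1.0); adding a derivative term tames it at the cost of a slower approach, which is the basic reason aggressive speed and precise stopping fight each other.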
LZ_Khan
Camera feed processing latency would be my guess. The system needs to make sense of a continuous video feed so moving slower reduces how much happens in between frames.
cmarschner
In this case it’s the model. There’s an insane amount of computation that should happen in milliseconds but given today’s hardware might run 10 times too slow. Mind you these models take in lots of sensor data and spit out trajectories in a tight feedback loop.
yojo
I’m no robotics expert, but look how close the robots are to squishy human meat bags.
I assume Google is being very careful to keep the speeds well below the “oops, it took your jaw off” threshold.
mrshadowgoose
This is highly unlikely to be a mechanical limitation of the robotic arms. As others have said, it's likely an inference speed limitation - their model is understanding, reacting, and producing outputs as fast as its supporting hardware can.
But that all just poofs away in a year or two as inferencing hardware gets better/faster. And for many use cases, the slowness/awkwardness doesn't really matter as long as the job gets done.
"AI working in meatspace" was supposed to be hard, and it's rapidly becoming clear that isn't going to be the case at all.
YeGoblynQueenne
Robot demos have been stuck at that sort of speed for more than a decade and they didn't have to wait for a giant LLM to do inference. Why's that going to get any better now? And just in a year or two?
CamperBob2
F=ma. An arm that's powerful enough to move extremely quickly is powerful enough to hurt.
1970-01-01
I'm not a robot expert, but I do know the answer is simply safety. Once it learns what to do, it can do it faster and faster, but when something goes very wrong, it will go very wrong.
daralthus
inference speed of the models is probably the bottleneck
underdeserver
We're witnessing the robot apocalypse coming at us in slow motion. It's coming gradually, until one day it'll come suddenly.
YeGoblynQueenne
It's slow motion because they only speed it up by 3-5x. If you play the videos at 30x then it will really look like it's coming at us at full speed.
darkhorse222
Since profit controls everything in this society and we are in a regulatory capture government, there is only incentives to build murder robots, not disincentives.
roughly
The distinction today is that the murderbots work in the back office of your health insurance company.
Yet again, ours proves to be a really boring dystopia.
intrasight
Most everything comes slowly and then all at once. Technology. Bankruptcy. Death.
Animats
I'd like to see more about what the Gemini system actually tells the robot. Eventually, it comes down to motor commands. It's not clear how they get there.
system2
I think they call it Gemini while it is a voice-activated robotic arm that is very well designed. I doubt this has anything to do with LLMs other than communicating with the robotic software.
Here's the link to the full playlist with 20 video demonstrations (around 1min each) on YouTube: https://www.youtube.com/watch?v=4MvGnmmP3c0&list=PLqYmG7hTra...