tags:
- stub
Programming prerequisites, listed in Coding Basics for now
When you open a new robot project you'll see a lot of files we'll interact with.
src/main
deploy
java
frc/robot
commands
ExampleCommand.java
subsystems
ExampleSubsystem.java
Constants.java
Main.java
Robot.java
RobotContainer.java
vendordeps
For typical projects, you'll be spending the most time in RobotContainer, subsystems, and occasionally commands.
For some early practice projects or special use cases, you might also interact with Robot.java a bit.
Many helpful utilities we'll use for robot projects rely on code that's not included by default. WPILib has a small manager to assist with installing these, detailed here:
We'll also utilize a number of software tools for special interactions with hardware or software components. Some of these include
The hardest part of getting started with robots is figuring out where your robot code goes.
Robot.java
is a very powerful file, and it's possible to write your entire robot in just this one file! For reasons we'll get into, we do not want to do this. However, its setup does a good job of explaining how a robot works. Let's look at the structure of this file for now:
public class Robot extends TimedRobot {
private Command m_autonomousCommand;
private final RobotContainer m_robotContainer;
public Robot() {
m_robotContainer = new RobotContainer();
}
public void robotPeriodic() {}
public void disabledInit() {}
public void disabledPeriodic() {}
public void autonomousInit() {}
public void autonomousPeriodic() {}
public void teleopInit() {}
public void teleopPeriodic() {}
public void testInit() {}
public void testPeriodic() {}
//a few more ignored bits for now
}
From the Init/Periodic pairing, we can group these into several different modes.
Indeed, if we look at our Driver Station, we see several modes mentioned.
Teleop, Auto, and Test are simply selectable operational modes. However, you might want to utilize each one slightly differently.
"Practice mode" is intended to simulate real matches: This DriverStation mode runs Autonomous mode for 15 seconds, and then Teleop Mode for the remainder of a match time.
"Disabled" mode is automatically selected whenever the robot is not enabled. This includes when the robot boots up, as well as as whenever you hit "disabled" on the driver station.
Disabled mode will also cancel any running Commands.
"Robot mode" isn't an explicit mode: Instead, of "Robot Init", we just use the constructor: It runs when the robot boots up. In most cases, the primary task of this is to set up Robot Container.
robotPeriodic
just runs every loop, regardless of which mode is active.
We can also see a grouping of Init and Periodic functions: each Init runs once when its mode starts, while each Periodic runs every loop while that mode is active.
We generally won't add much code in Robot.java, but understanding how it works is a helpful starting point to understanding the robot itself.
As mentioned above, the "Robot Container" is created as the robot boots up. When you create a new project, this file contains a small number of functions and examples to help keep you organized.
public class RobotContainer{
  ExampleSubsystem subsystem = new ExampleSubsystem();
  ExampleCommand command = new ExampleCommand();
  CommandXboxController joystick = new CommandXboxController(0);

  RobotContainer(){
    configureBindings();
  }

  public void configureBindings(){
    //Not a special function; Just intended to help organize
  }

  public Command getAutonomousCommand(){/*stuff*/}
}
This file introduces a couple new concepts
The use of Commands and Subsystems goes a long way to managing complex robot interactions across many subsystems. However, they're certainly tricky concepts to get right off the bat.
Sometimes, you'll have oddball constants that you need to access in multiple places in your code. Constants.java advertises itself as a place to sort and organize these values.
Without getting too into the "why", in general you should minimize use of Constants.java; It leads to several problems as your robot complexity increases.
Instead, simply follow good practices for scope encapsulation, and keep the constants at the lowest necessary scope.
If you find yourself depending on a lot of constants, you might need to consider Refactoring your code a bit to streamline things. Note that Stormbots code has almost nothing in here!
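As a small sketch of what that looks like (the class, names, and values here are hypothetical), constants can simply live as private fields inside the subsystem that uses them:

import edu.wpi.first.wpilibj2.command.SubsystemBase;

// Constants scoped to the one subsystem that actually needs them
class Roller extends SubsystemBase{
  private static final double kIntakeSpeed = 0.5;
  private static final double kEjectSpeed = -0.3;
  // ...motor setup and command factories would use these directly...
}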
Requires: Robot Code Basics
Recommends: Commands
This documentation assumes you have the third party Rev Library installed. You can find instructions here.
https://docs.wpilib.org/en/latest/docs/software/vscode-overview/3rd-party-libraries.html
This document also assumes correct wiring and powering of a motor controller. This should be the case if you're using a testbench.
// Robot.java
public class Robot extends TimedRobot{
public void teleopPeriodic(){
}
}
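As a minimal sketch of driving a motor from here (the CAN ID of 1 and controller port 0 are assumptions, and the exact REVLib import paths depend on the version you have installed):

import com.revrobotics.spark.SparkMax;
import com.revrobotics.spark.SparkLowLevel.MotorType;
import edu.wpi.first.wpilibj.TimedRobot;
import edu.wpi.first.wpilibj.XboxController;

public class Robot extends TimedRobot{
  SparkMax motor = new SparkMax(1, MotorType.kBrushless); // assumed CAN ID
  XboxController controller = new XboxController(0);      // assumed port

  public void teleopPeriodic(){
    // Drive the motor directly from the left stick; inverted so "up" is positive
    motor.set(-controller.getLeftY());
  }
}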
aliases:
- Command
Requires:
Robot Code Basics
You can learn this without having done Motor Control, but it's often more fun to learn alongside it in order to have more interesting, visual commands while experimenting.
The commands provided as an example just print little messages visible in the RioLog, allowing this to be set up without motors
A Command is an event-driven code structure that allows you to manage when code runs, what resources it uses, and when it ends.
In the context of a robot, it allows you to easily manage a lot of the complexity involved in coordinating multiple Subsystems.
The code structure itself is fairly straightforward, and defines a few methods; Each method defines what code runs at what time.
class ExampleCommand extends CommandBase{
public ExampleCommand(){}
public void initialize(){}
public void execute(){}
public boolean isFinished(){ return false; }
public void end(boolean cancelled){}
}
Behind the scenes, the robot runs a command scheduler, which helps manage what runs when. Once started, a command will run according to the following flowchart, more formally known as a state machine.
This is the surface level complexity, which sets you up for how to view, read, and write commands.
A key aspect of Commands is their ability to claim temporary, exclusive ownership over a Subsystem . This is done by passing the subsystem into a command, and then adding it as a requirement
class ExampleCommand extends CommandBase{
  public ExampleCommand(ExampleSubsystem subsystemName){
    addRequirements(subsystemName);
  }
}
Now, whenever the command is started, it will forcibly claim that subsystem. It'll release that claim when it runs its end() block.
This ability of commands to hold a claim on a subsystem has a lot of utility. The main value is in preventing you from doing silly things like trying to tell a motor to go forward and backward at once.
Now that we've established subsystem ownership, what happens when you do try to tell your motor to go forward and then backward?
When you start the command, it will forcibly interrupt other commands that share a resource with it, ensuring that the new command has exclusive access.
It'll look like this
When a command is cancelled, the command scheduler runs the command's end(cancelled)
block, passing in a value of true. While not typical, some commands will need to do different cleanup routines depending on whether they exited on task completion, or whether something else kicked them off a subsystem.
Commands can be started in one of 3 ways:
- Binding them to a button or other Trigger
- Setting them as a subsystem's default command
- Calling the command's .schedule() method

They can be stopped via a few methods:
- Returning true from its isFinished() method
- Being interrupted by another command that requires the same subsystem
- Calling its .cancel() method directly
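As a quick sketch (exampleCommand and joystick are placeholder names):

// Binding to a button starts the command when A is pressed
joystick.a().onTrue(exampleCommand);
// Or start and stop it manually from code
exampleCommand.schedule();
exampleCommand.cancel(); // runs end(true)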
It's often the case that a subsystem will have a clear, preferred action when nothing else is going on. In some cases, it's stopping a spinning roller, intake, or shooter. In others, it's retracting an intake. Maybe you want your lights to do a nice idle pattern. Maybe you want your chassis joystick control to just start when the robot does.
Default commands are ideal for this. Default commands run just like normal commands, but are automatically re-started once nothing else requires the associated subsystem resource.
Just like normal commands, they're automatically stopped when the robot is disabled, and cancelled when something else requires the subsystem.
Unlike normal commands, it's not allowed to have the command return true from isFinished()
. The scheduler expects default commands to run until they're cancelled.
Also unlike other commands, a default command must require the associated subsystem, and cannot require other subsystems.
It's worth making a note that a Default Command cannot start during a Command Group that contains a command requiring the subsystem! If you're planning complex command sequences like an auto, make sure they don't rely on DefaultCommands as part of their operation.
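Setting one up is a single call, typically in RobotContainer; roller and its stop() command factory are assumed names here:

// Runs whenever no other command requires the roller.
// The default must never finish, so stop() should be built with run(), not runOnce().
roller.setDefaultCommand(roller.stop());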
As you're writing new commands, make sure you consider which subsystems you should require.
You'll always want to require subsystems that you will modify, or otherwise need exclusive access to. This commonly involves commands that direct a motor, change settings, or something of that sort.
In some cases, you'll have a command that only reads from a subsystem. Maybe you have an LED subsystem, and want to change lights according to an Elevator subsystem's height.
One way to do this is to have a command that requires the LEDs (it needs to change the lights), but does not require the Elevator (it's just reading the encoder).
As a general rule, most commands you write will simply require exactly one subsystem. Commands that need to require multiple subsystems can come up, but typically this is handled by command composition and command groups.
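For example, a command factory on a hypothetical LED subsystem might look like the following; setColorForHeight() and getHeight() are made-up helpers. It requires the LEDs through run(), but only calls a getter on the elevator:

// Inside the LED subsystem
public Command matchElevatorHeight(Elevator elevator){
  return run(()-> setColorForHeight(elevator.getHeight()));
}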
Every new project will have an example command in a dedicated file, which should look familiar
class ExampleCommand extends CommandBase{
public ExampleCommand(){
//Runs once when the command is created as the robot boots up.
//Register required subsystems, if appropriate
//addRequirements(subsystem1, subsystem2...);
}
public void initialize(){
//Runs once when the command is started/scheduled
}
public void execute(){
//Runs every code loop
}
public boolean isFinished(){
//Returns true if the command considers its task done, and should exit
return false;
}
public void end(boolean cancelled){
//Perform cleanup; Can do different things if it's cancelled
}
}
This form of command is mostly good for instructional purposes while you're getting started.
On more complex robot projects, trying to use the file-based Commands forces a lot of mess in your Subsystems; In order for these to work, you need to make many of your Subsystem details public, often requiring you to make a bunch of extra functions to support them.
Command factories are the optimal way to manage your commands. With this convention, you don't create separate Command files; instead, you create methods in your Subsystem that build and return new Command objects. This convention is commonly called a "Factory" pattern.
Here's a short example and reference layout:
//In your subsystem
class Roller extends SubsystemBase{
  Roller(){}
  public Command spinForward(){
    return Commands.run(()->{
      System.out.println("Spin Forward!!");
    }, this);
  }
}
//In your RobotContainer, let's create a copy of that command and bind it to a button
class RobotContainer{
  Roller roller = new Roller();
  CommandXboxController joystick = new CommandXboxController(0);
  RobotContainer(){
    joystick.a().whileTrue(roller.spinForward());
  }
}
That's it! Not a lot of code, but gives you a flexible base to start with.
This example uses Commands.run(), one of the many options in the Commands class. These command shortcuts let you provide lambdas representing some combination of a Command's normal initialize, execute, isFinished, or end functions. A couple of notable examples are Commands.runOnce(...) and Commands.startEnd(...).
Most commands you'll write can be written like this, making for simple and concise subsystems.
Many Commands helpers require you to provide the required subsystem after the lambdas. If you forget, you can end up with multiple commands fighting to modify the same subsystem.
Building on the above, Subsystems have several of these command helpers built in! You can see this.startRun(...), this.run(...), etc. These work the same as the Commands class versions, but automatically include the current subsystem as a requirement.
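For instance, inside a subsystem (motor here is an assumed field):

// Equivalent to Commands.run(...), but 'this' is required automatically
public Command spinForward(){
  return run(()-> motor.set(0.5));
}
// startEnd runs the first lambda when scheduled and the second when the command ends
public Command spinWhileHeld(){
  return startEnd(()-> motor.set(0.5), ()-> motor.set(0));
}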
There's a notable special case in new FunctionalCommand(...)
, which takes 4 lambdas for a full command, perfectly suitable for those odd use cases.
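As a sketch (motor and beamBreak are assumed fields), the four lambdas line up with a command's lifecycle; note that the constructor takes the end handler before isFinished:

public Command feedUntilLoaded(){
  return new FunctionalCommand(
    ()->{},                        // initialize
    ()-> motor.set(0.3),           // execute
    (interrupted)-> motor.set(0),  // end: always stop the motor
    ()-> beamBreak.get(),          // isFinished: game piece detected
    this);                         // requirement
}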
The real power of commands comes from the Command Compositions , and "decorator" functions. These functions enable a lot of power, allowing you to change how/when commands run, and pairing them with other commands for complex sequencing and autos.
For now, let's focus on the two that are most immediately useful:
- command.withTimeout(time), which runs a command for a set duration.
- command.until(()->someCondition), which allows you to exit a command based on things like sensor inputs.

Commands also has some helpful factories for hooking multiple commands together. The most useful is a simple sequence.
Commands.sequence(
roller.spinForward().withTimeout(0.1),
roller.spinBackward().withTimeout(0.1),
roller.spinForward().withTimeout(0.5)
)
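For example, pairing a factory with until() and a hypothetical beamBreak sensor:

// Spin the roller until the beam break sees a game piece, then the command ends
joystick.a().onTrue(roller.spinForward().until(()-> beamBreak.get()));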
tags:
- stub
Requires:
Commands
Motor Control
tags:
- stub
aliases:
- Rollers
- Roller
Requires:
Motor Control
Recommends:
FeedForwards
PID
tags:
- stub
Requires:
Motor Control
Feedforwards model an expected motor output for a system to hit specific target values.
The easiest example is a motor roller. Let's say you want to run at ~3000 RPM. You know your motor has a top speed of ~6000 RPM at 100% output, so you'd correctly expect that driving the motor at 50% would get about 3000 RPM. This simple correlation is the essence of a feed-forward. The details are specific to the system at play.
The WPILib docs have good fundamentals on feedforwards that are worth reading.
https://docs.wpilib.org/en/stable/docs/software/advanced-controls/controllers/feedforward.html
Feed-forwards are specifically tuned to the system you're trying to operate, but helpfully fall into a few simple terms, and straightforward calculations. In many cases, the addition of one or two terms can be sufficient to improve and simplify control.
The simplest feedforward you'll encounter is the "static feed-forward". This term represents initial momentum, friction, and certain motor dynamics.
You can see this in systems by simply trying to move very slowly. You'll often notice that the output doesn't move the system until you hit a certain threshold. That threshold is approximately equal to kS.
The static feed-forward affects output according to the simple equation output += kS * sign(direction of travel).
A kG value effectively represents the output needed for a system to negate gravity.
Elevators are the simpler case: You can generally imagine that since an elevator has a constant weight, it should take a constant amount of force to hold it up. This means the elevator gravity gain is simply a constant value, affecting the output as output += kG.
A more complex kG calculation is needed for pivot or arm systems. You can get a good sense of this by grabbing a heavy book, and holding it at your side with your arm down. Then, rotate your arm outward, fully horizontal. Then, rotate your arm all the way upward. You'll probably notice that the book is much harder to hold steady when it's horizontal than up or down.
The same is true for these systems, where the force needed to counter gravity changes based on the angle of the system. To be precise, it's at a maximum when horizontal, and zero when directly above or below the pivot. Mathematically, it follows cos(angle), with the angle measured from horizontal.
This form of the gravity constant affects the output according to output += kG * cos(angle).
The velocity feed-forward represents the expected output to maintain a target velocity. This term accounts for physical effects like dynamic friction and air resistance, along with motor characteristics such as back-EMF.
This is most easily visualized on systems with a velocity goal state. In that case, the contribution is simply output += kV * targetVelocity.
In contrast, for positional control systems, knowing the desired system velocity is quite a challenge. In general, you won't know the target velocity unless you're using Motion Profiles to generate the instantaneous velocity target.
The acceleration feed-forward largely negates inertial effects, like the mass of a flywheel resisting spin-up. It simply provides a boost to output, as output += kA * targetAcceleration, to achieve the target velocity quicker.
Putting this all together, it's helpful to de-mystify the math happening behind the scenes.
The short form is just a re-clarification of the terms and their units: kS (output), kG (output), kV (output per unit of velocity), and kA (output per unit of acceleration).
A roller system will often simply be output = kS*sign(velocity) + kV*velocity + kA*acceleration.
An elevator system will look similar, just adding the constant gravity term: output = kG + kS*sign(velocity) + kV*velocity + kA*acceleration.
Lastly, arm and pivot systems differ only by the cosine term used to scale kG: output = kG*cos(angle) + kS*sign(velocity) + kV*velocity + kA*acceleration.
Of course, the intent of a feed-forward is to model your mechanics to improve control. As your system increases in complexity, and demands for precision increase, optimal control might require additional complexity! A few common cases:
Since a feed-forward is a prediction about how your system behaves, it works very well for fast, responsive control. However, it's not perfect; If something goes wrong, your feed-forward simply doesn't know about it, because it's not measuring what actually happens.
In contrast, feed-back controllers like a PID are designed to act on the error between a system's current state and target state, and make corrective actions based on the error. Without first encountering system error, it doesn't do anything.
The combination of a feed-forward along with a feed-back system is the power combo that provides robust, predictable motion.
WPILib has several classes that streamline the underlying math for common systems, although knowing the math still comes in handy! The docs explain them (and associated warnings) well.
https://docs.wpilib.org/en/stable/docs/software/advanced-controls/controllers/feedforward.html
Integrating in a robot project is as simple as crunching the numbers for your feed-forward and adding it to your motor value that you write every loop.
class ExampleSubsystem extends SubsystemBase{
  SparkMax motor = new SparkMax(...);
// Declare our FF terms and our object to help us compute things.
double kS = 0.0;
double kG = 0.0;
double kV = 0.0;
double kA = 0.0;
ElevatorFeedforward feedforward = new ElevatorFeedforward(kS, kG, kV, kA);
ExampleSubsystem(){}
Command moveManual(double percentOutput){
return run(()->{
double output;
//We don't have a motion profile or other velocity control
//Therefore, we can only assert that the velocity and accel are zero
output = percentOutput+feedforward.calculate(0,0);
// If we check the math, this feedforward.calculate() thus
// evaluates as simply kg;
// We can improve this by instead manually calculating a bit,
// since we know the direction we want to move in
output = percentOutput + Math.signum(percentOutput)*kS + kG;
motor.set(output);
});
}
Command movePID(double targetPosition){
return run(()->{
//Same notes as moveManual's calculations
var feedforwardOutput = feedforward.calculate(0,0);
// When using the Spark closed loop control,
// we can pass the feed-forward directly to the onboard PID
motor
.getClosedLoopController()
.setReference(
targetPosition,
ControlType.kPosition,
ClosedLoopSlot.kSlot0,
feedforwardOutput,
ArbFFUnits.kPercentOut
);
//Note, the ArbFFUnits should match the units you calculated!
});
}
Command moveProfiled(double targetPosition){
// This is the only instance where we know all parameters to make
// full use of a feedforward.
// Check [[Motion Profiles]] for further reading
return Commands.none(); // placeholder so this sketch compiles
}
}
When tuning feed-forwards, it's helpful to recognize that values being too high will result in notable problems, but gains being too low generally result in lower performance.
Just remember that the lowest possible value is 0, which is equivalent to not using that feed-forward in the first place; you can only improve from there.
These two terms are defined at the boundary between "moving" and "not moving", and thus are closely intertwined. Or, in other words, they interfere with finding the other. So it's best to find them both at once.
It's easiest to find these with manual input, with your controller input scaled down to give you the most possible control.
Start by positioning your system so you have room to move both up and down. Then, hold the system perfectly steady, and increase output until it just barely moves upward. Record that value.
Hold the system stable again, and then decrease output until it just barely starts moving down. Again, record the value.
Thinking back to what each term represents, if a system just starts moving up, then the provided input must be equal to kG + kS. Likewise, the input recorded when it just barely starts moving down is equal to kG - kS.
Helpfully, for systems where gravity isn't a factor (kG is effectively zero), those two recordings are simply +kS and -kS.
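Putting those two recordings together, a little algebra gives both gains at once (up and down are the two recorded outputs):

up = kG + kS
down = kG - kS
kG = (up + down) / 2
kS = (up - down) / 2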
For pivot/arm systems, this routine works as described if you can run it with the arm approximately horizontal. It cannot work if the arm is vertical. If your system cannot be held horizontal, you may need to be creative, or do a bit of trig to account for the angle at which you recorded your values.
Importantly, this routine actually returns a kS that's often slightly too high, resulting in undesired oscillation. That's because we recorded a minimum that causes motion, rather than the maximum value that doesn't cause motion. Simply put, it's easier to find this way. So, we can just compensate by reducing the calculated kS slightly; Usually multiplying it by 0.9 works great.
Because this type of system is relatively linear and simple, finding kV is also pretty simple. We know that at 100% output the roller settles at its free speed (about 6000 RPM in the earlier example). We know the velocity term contributes output = kV * velocity. This means we can quickly assert that kV ≈ 1/freeSpeed; for the example above, roughly 1/6000 ≈ 0.00017 output per RPM.
Beyond roller kV, kA and kV values are tricky to identify with simple routines, and require Motion Profiles to take advantage of. As such, they're somewhat beyond the scope of this article.
The optimal option is using System Identification to calculate the system response to inputs over time. This can provide optimal, easily repeatable results. However, it involves a lot of setup, and is potentially hazardous to your robot when done without caution.
The other option is to tune by hand; This is not especially challenging, and mostly involves a process of moving between goal states, watching graphs, and twiddling numbers. It usually looks like this:
This process benefits from a relatively low P gain, which helps keep the system stable. Once your system is tuned, you'll probably want a relatively high P gain, now that you can assert the feed-forward is keeping your error close to zero.
Note, you might observe that the kCos output,
tags:
- stub
Requires
Commands
Encoder Basics
TODO:
Add some graphs
https://github.com/DylanHojnoski/obsidian-graphs
Write synopsis
https://docs.revrobotics.com/revlib/spark/closed-loop
A PID system is a Closed Loop Controller designed to reduce system error through a simple, efficient mathematical approach.
You may also appreciate Chapters 1 and 2 from controls-engineering-in-frc.pdf, which cover PIDs very well.
To get an intuitive understanding of PIDs and feedback loops, it can help to start from scratch and recreate one from basic assumptions and simple code.
Let's start from the core concept of "I want this system to go to a position and stay there".
Initially, you might simply say "OK, if we're below the target position, go up. If we're above the target position, go down." This is a great starting point, with the following pseudo-code.
setpoint= 15 //your target position, in arbitrary units
sensor= 0 //Initial position
if(sensor < setpoint){ output = 1 }
else if(sensor > setpoint){ output = -1 }
motor.set(output)
However, you might see a problem. What happens when setpoint and sensor are equal?
If you responded with "It rapidly switches between full forward and full reverse", you would be correct. If you also thought "This sounds like it might damage things", then you'll understand why this controller is named a "Bang-bang" controller, after the noises it tends to make.
Your instinct for this might be to simply not go full power. Which doesn't solve the problem, but reduces its negative impacts. But it also creates a new problem. Now it's going to oscillate at the setpoint (but less loudly), and it's also going to take longer to get there.
So, let's complicate this a bit. Let's take our previous bang-bang, but split the response into two different regions: Far away, and closer. This is easier if we introduce a new term: Error. Error just represents the difference between our setpoint and our sensor, simplifying the code and procedure. "Error" helpfully is a useful term, which we'll use a lot.
run(()->{
setpoint= 15 //your target position, in arbitrary units
sensor= 0 //read your sensor here
error = setpoint-sensor
if (error > 5){ output = 1 }
else if(error > 0){ output = 0.2 }
else if(error < -5){ output = -1 }
else if(error < 0){ output = -0.2 }
motor.set(output)
})
We've now slightly improved things; Now, we can expect more reasonable responses as we're close, and fast responses far away. But we still have the same problem; Those harsh transitions across each else if. Splitting up into more and more branches doesn't seem like it'll help. To resolve the problem, we'd need an infinite number of tiers, dependent on how far we are from our targets.
With a bit of math, we can do that! Our error
term tells us how far we are, and the sign tells us what direction we need to go... so let's just scale that by some value. Since this is a constant value, and the resulting output is proportional to this term, let's call it kp: Our proportional constant.
run(()->{
setpoint= 15 //your target position, in arbitrary units
sensor= 0 //read your sensor here
kp = 0.1
error = setpoint-sensor
output = error*kp
motor.set(output)
})
Now we have a better behaved algorithm! At a distance of 10, our output is 1. At 5, it's half. When on target, it's zero! It scales just how we want.
Try this on a real system, and adjust the kP until your motor reliably gets to your setpoint, where error is approximately zero.
In doing so, you might notice that you can still oscillate around your setpoint if your gains are too high. You'll also notice that as you get closer, your output drops to zero. This means, at some point you stop being able to get closer to your target.
This is easily seen on an elevator system. You know that gravity pulls the elevator down, requiring the motor to push it back up. For the sake of example, let's say an output of 0.2 holds it up. Using our previous kP of 0.1, a distance of 2 generates that output of 0.2. If the distance is 1, we only generate 0.1... which is not enough to hold it! Our system actually is only stable below where we want. What gives!
This general case is referred to as "standing error" (more formally, "steady-state error"): Every loop through our controller fails to reduce the error to zero, and the system eventually settles at a constant offset. So.... what if.... we just add that error up over time? We can then incorporate that error into our outputs. Let's do it.
setpoint= 15 //your target position, in arbitrary units
errorsum=0
kp = 0.1
ki = 0.001
run(()->{
sensor= 0 //read your sensor here
error = setpoint-sensor
errorsum += error
output = error*kp + errorsum*ki
motor.set(output)
})
The mathematical operation involved here is called integration, which is what this term is called. That's the "I" in PID.
In many practical FRC applications, this is probably as far as you need to go! P and PI controllers can do a lot of work, to suitable precision. This is a very flexible, powerful controller, and can get "pretty good" control over a lot of mechanisms.
This is probably a good time to read across the WPILib PID Controller page; This covers several useful features. Using this built-in PID, we can reduce our previous code to a nice formalized version that looks something like this.
PIDController pid = new PIDController(kP, kI, kD);
run(()->{
sensor = motor.getEncoder().getPosition();
motor.set(pid.calculate(sensor, setpoint));
})
A critical detail in good PID controllers is the iZone. We can easily visualize what problem this is solving by just asking "What happens if we get a game piece stuck in our system"?
Well, we cannot get to our setpoint. So, our errorSum gets larger, and larger.... until our system is running full power into this obstacle. That's not great. Most of the time, something will break in this scenario.
So, the iZone allows you to constrain the amount of error the controller actually stores. It might be hard to visualize the specific numbers, but you can just work backward from the math. If output = errorsum*kI, then maxDesiredITermOutput = iZone*kI, so iZone = maxDesiredITermOutput/kI. For example, with kI = 0.001 and a desired maximum I contribution of 0.3, the iZone would be 0.3/0.001 = 300 units of accumulated error.
Lastly, what's the D in PID?
Well, it's less intuitive, but let's try. Have you seen the large spike in output when you change a setpoint? Give the output a plot, if you so desire. For now, let's just reason through a system using the previous example PI values, and a large setpoint change resulting in an error of 20.
Your PI controller is now outputting a value of 2.0 ; That's double full power! Your system will go full speed immediately with a sharp jolt, have a ton of momentum at the halfway point, and probably overshoot the final target. So, what we want to do is constrain the speed; We want it fast but not too fast. So, we want to reduce it according to how fast we're going.
Since we're focusing on error as our main term, let's look at the rate the error changes. When the error is changing fast, we want to reduce the output. The difference is simply defined as error-previousError, so a similar strategy with a gain gives us output += kD*(error-previousError).
This indeed gives us what we want: When the error is dropping quickly, this contribution is large and negative, acting to reduce the total output and slow the corrective action.
However, this term has another secret power, which is disturbance rejection. Let's assume we're at a steady position, the system is settled, and error = 0. Now, let's bonk the system downward, giving us a positive error. Suddenly error-previousError is positive, and the system generates an upward force. For this interaction, all components of the PID are working in tandem to get things back in place.
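Putting the three terms together in the same pseudo-code style as above (the kd value is just an illustrative placeholder):

setpoint = 15 //your target position, in arbitrary units
errorsum = 0
previousError = 0
kp = 0.1
ki = 0.001
kd = 0.01
run(()->{
sensor = 0 //read your sensor here
error = setpoint - sensor
errorsum += error
output = error*kp + errorsum*ki + (error - previousError)*kd
previousError = error
motor.set(output)
})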
OK, that's enough nice things. Understanding PIDs requires knowing when they work well, and when they don't, and when they actually cause problems.
So, how do you make the best use of PIDs?
In other words, this is an error correction mechanism, and if you avoid adding error to begin with, you more effectively accomplish the motions you want. Throwing a PID at a system can get things moving in a controlled fashion, but care should be taken to recognize that it's not intended as the primary control handler for systems.
tags:
- stub
Requires:
Triggers
Hardware:
Sensing is interacting with physical objects, and changing robot behaviour based on it.
This can use a variety of sensors and methods, and will change from system to system
Often simple sensors like break beams or switches can tell you something very useful about the system state, which can help you set up a different, more precise sensor.
The most common application is in Homing, such as in Elevator-type systems. On boot, your Encoder may not properly reflect the system state, and thus the elevator position is invalid. But if you have a switch at the end of travel, you can use that simple switch to re-initialize your encoder.
aliases:
- Relative Encoder
tags:
- stub
Requires:
Robot Code Basics
Absolute vs Relative encoders
Startup positioning
Slew Rate Limiting
An Encoder is a sensor that counts rotations.
tags:
- stub
aliases:
- Trigger
This is a numerical trick that can allow use of absolute encoders
Elevator
https://en.wikipedia.org/wiki/Chinese_remainder_theorem
Use two different scales
Compare
aliases:
- Elevator
tags:
- stub
Requires:
FeedForwards
PID
Reading Resources:
Homing
tags:
- stub
aliases:
- Intake
Requires:
SuperStructure Rollers
Sensing Basics
Recommends:
State Machines
Requires as needed:
SuperStructure Rollers
SuperStructure Elevator
SuperStructure Arm
Intake complexity can range from very simple rollers that capture a game piece, to complex actuated systems intertwined with other scoring mechanisms.
A common "over the bumper" intake archetype is a deployed system that
The speed of deployment and retraction both impact cycle times, forming a critical competitive aspect of the bot.
The automatic detection and retraction provide cycle advantages (streamlining the driver experience), but also prevent fouls and damage due to the collisions on the deployed mechanism.
tags:
- stub
Requires:
Sensing Basics
???
Covering system "state" is very useful, especially in subsystems
ConditionalCommand + SelectCommand can be useful for attributing actions and states on simple systems
Need to find a sensible formal way to cover it; It's easy to make "custom" state machines for simple systems, but hard to scale up in complexity with consistent patterns.
Consideration: Explain state machines here, as an explanation of how they're used and what they represent
Actually make it a workshop later.
aliases:
- Arm
- Pivot
tags:
- stub
Requires:
FeedForwards
PID
tags:
- stub
Requires:
Triggers
NetworkTables
tags:
- stub
Configure a NavX or gyro on the robot
Find a way to zero the sensor when the robot is enabled in auto
Create a command that tells you when the robot is pointed the same way as when it started
Print the difference between the robot's starting angle and current angle
TODO
what's an mxp
what port/interface to use, usb
which axis are you reading
tags:
- stub
Understand how to efficiently communicate to and from a robot for diagnostics and control
aliases:
- Homing
Homing is the process of recovering physical system positions on relative encoders.
SuperStructure Arm
SuperStructure Elevator
And will generally be done after most requirements for those systems
When a system is booted using Relative Encoders, the encoder boots with a value of 0, like you'd expect. However, the real physical system can be anywhere in its normal range of travel, and the bot has no way to know the difference.
Homing is the process of reconciling this difference, thus allowing your code to assert a known physical position, regardless of what position the system was in when it booted.
Homing is not a hard requirement of Elevator or Arm systems. As long as you boot your systems in known, consistent states, you can operate without issue.
However, homing is generally recommended, as it provides benefits and safeguards
When looking at homing, the concept of a "Hard Stop" will come up a lot. A hard stop is simply a physical constraint at the end of a system's travel, that you can reliably anticipate the robot hitting without causing system damage.
In some bot designs, hard stops are free. In other designs, hard stops require some specific engineering design.
Any un-homed system has potential to perform in unexpected ways, potentially causing damage to itself or its surroundings.
We'll gloss over this for now, but make sure to set safe motor current constraints by default, and only enable full power when homing is complete.
With this method, the consistency comes from the physical reset of the robot when first powering on the robot. Humans must physically set all non-homing mechanisms, then power the robot.
From here, you can do anything you would normally do, and the robot knows where it is.
This method is often "good enough", especially for testing or initial bringup. For some robots, gravity makes it difficult to boot the robot outside of the expected condition.
With this method, make sure your code does not reset encoder positions when initializing.
If you do, code resets or power loss will cause a de-sync between the booted position and the operational one. You have to trust the motor controller + encoder to retain positional accuracy.
Current detection is a very common, and reliable method within FRC. With this method, you drive the system toward a hard stop, and monitor the system current.
When the system hits the hard stop, the load on your system increases, requiring more power. This can be detected by polling for the motor current. When your system exceeds a specific current for a long enough time, you can assert that your system is homed!
Speed Detection works by watching the encoder's velocity. You expect that when you hit the hard stop, the velocity should be zero, and go from there. However, there are some surprises that make this more challenging than current detection.
Velocity measurements can be very noisy, so using a filter is generally required.
This method also suffers from the simple fact that the system velocity will be zero when homing starts. And zero is also the speed you're looking for as an end condition. You also cannot guarantee that the system speed ever increases above zero, as it can start against the hard stop.
As such, you can't do a simple check, but need to monitor the speed for long enough to assert that the system should have moved if it was able to.
Limit switches are a tried and true method in many systems. You simply place a physical switch at the end of travel; When the bot hits the end of travel, you know where it is.
Limit switches require notable care on the design and wiring to ensure that the system reliably contacts the switch in the manner needed.
The apparent simplicity of a limit switch hides several design and mounting considerations. In an FRC environment, some of these are surprisingly tricky.
Because of these challenges, limit switches in FRC tend to be used in niche applications, where use of hard stops is restricted. One such case is screw-driven Linear Actuators, which generate enormous amounts of force at very low currents, but are very slow and easy to mount things to.
Switches also come in multiple types, which can impact the ease of design. In many cases, a magnetic hall effect sensor is optimal, as it's non-contact, and easy to mount alongside a hard stop to prevent overshoot.
Most 3D printers use limit switches, allowing for very good demonstrations of the routines needed.
For designs where hard stops are not possible, consider a Roller Arm Limit Switch and run it against a CAM. This configuration allows the switch to be mounted out of the line of motion, but with an extended throw.
Index switches work similarly to Limit Switches, but the expectation is that they're in the middle of the travel, rather than at the end of travel. This makes them unsuitable as a solo homing method, but useful as an auxiliary one.
Index switches are best used in situations where other homing routines would simply take too long, but you have sufficient knowledge to know that it should hit the switch in most cases.
This can often come up in Elevator systems where the robot starting configuration puts the carriage far away from the nearest limit.
In this configuration, use of a non-contact switch is generally preferred, although a roller-arm switch and a cam can work well.
In some cases we can use absolute sensors such as Absolute Encoders or Range Finders to directly detect information about the robot state, and feed that information into our encoders.
This method works very effectively on Arm based systems; Absolute Encoders on an output shaft provide a 1:1 system state for almost all mechanical designs.
Elevator systems can also use these routines using Range Finders, detecting the distance between the carriage and the end of travel.
Clever designers can also use Absolute Encoders for elevators in a few ways
A relatively simple routine, but just running your system with a known minimum power for a set length of time can ensure the system gets into a known position. After the time, you can reset the encoder.
This method is very situational. It should only be used in situations where you have a solid understanding of the system mechanics, and know that the system will not encounter damage when run for a set length of time.
In some cases you might be able to find the system home state (using gravity or another method), but find backlash is preventing you from hitting desired consistency and reliability.
This is most likely to be needed on Arm systems, particularly actuated Shooter systems. This is akin to a "calibration" as much as it is homing.
In these cases, homing routines will tend to find the absolute position by driving downward toward a hard stop. In doing so, this applies drive train tension toward the down direction. However, during normal operation, the drive train tension will be upward, against gravity.
This gives a small, but potentially significant difference between the "zero" detected by the sensor, and the "zero" you actually want. Notably, this value is not a consistent value, and wear over the life of the robot can impact it.
Similarly, in "no-homing" scenarios where you have gravity assertion, the backlash tension is effectively randomized.
To resolve this, backlash compensation then needs to run to apply tension "upward" before fully asserting a defined system state. This is a scenario where a time-based operation is suitable, as it's a fast operation from a known state. The power applied should be small: ideally the largest value that won't cause actual motion away from your hard stop (meaning, at or below kS+kG).
For an implementation of this, see CalibrateShooter from Crescendo.
Nominally, homing a robot is done once at first run, and from there you know the position. However, sometimes the robot has known mechanical faults that cause routine loss of positioning from the encoder's perspective. However, other sensors may be able to provide insight, and help correct the error.
This kind of error most typically shows up in belt or chain skipping.
To overcome these issues, what you can do is run some condition checking alongside your normal runtime code, trying to identify signs that the system is in a potentially incorrect state, and correcting sensor information.
This is best demonstrated with examples:
Online Position Recovery is a useful technique in a pinch. But, as with all other hardware faults, it's best to fix it in hardware. Use only when needed.
If the system is running nominally, these techniques don't provide much value, and can cause other runtime surprises and complexity, so it's discouraged.
In cases where such loss of control is hypothetical or infrequent, simply giving drivers a homing button tends to be a better approach.
When doing homing, you typically have 4 system states, each with their own behavior. Referring to it as a State Machine is generally simpler.
The UnHomed state should be the default bootup state. This state should prepare your system to
It's often a good plan to have some way to manually trigger a system to go into the Unhomed state and begin homing again. This allows your robot drivers to recover from unexpected conditions when they come up. There's a number of ways your robot can lose position during operation, most of which have nothing to do with software.
The Homing state should simply run the desired homing strategy.
Modeling this sequence tends to be the tricky part, and a careless approach will typically reveal a few issues
The obvious takeaway is that however you home, you want it to be fast and ideally run in the Auto sequence. Working with your designers can streamline this process.
Use of the Command decorator withInterruptBehavior(...)
allows an easy escape hatch. This flag allows an inversion of how Commands are scheduled; Instead of new commands cancelling running ones, this allows your homing command to forcibly block others from getting scheduled.
If your system is already operating on an internal state machine, homing can simply be a state within that state machine.
This state is easy: Your system can now assert the known position, set your Homed flag, apply updated power/speed constraints, and resume normal operation.
Conveniently, the whole homing process actually fits very neatly into the Commands model, making for a very simple implementation
- init() represents the unhomed state and reset
- execute() represents the homing state
- isFinished() checks the system state and indicates completion
- end(cancelled) can handle the homed procedure

class ExampleSubsystem extends SubsystemBase{
SparkMax motor = ....;
private boolean homed=false;
ExampleSubsystem(){
  // Start with a conservative current limit until we're homed
  // (the exact current-limit call will vary by motor controller library)
  motor.setMaxOutputCurrent(4); // Will vary by system
}
public Command goHome(){
  // FunctionalCommand arguments: onInit, onExecute, onEnd, isFinished, requirements
  return new FunctionalCommand(
    ()->{ homed=false; },                  // init: mark the system as unhomed
    ()->{ motor.set(-0.5); },              // execute: drive toward the hard stop
    (cancelled)->{                         // end: runs on completion or cancellation
      if(cancelled==false){
        homed = true;
        motor.getEncoder().setPosition(0); // assert the known position
        motor.setMaxOutputCurrent(30);     // restore normal power limits
      }
    },
    ()-> motor.getOutputCurrent() > 4,     // isFinished: current spike at the hard stop
    this
  )
  // Failsafe in case something goes wrong, since otherwise you
  // can't exit this command by button mashing
  .withTimeout(5)
  //Prevent other commands from stopping this one
  .withInterruptBehavior(InterruptionBehavior.kCancelIncoming);
}
}
This command can then be inserted at the start of autonomous, ensuring that your bot is always homed during a match. It also can be easily mapped to a button, allowing for mid-match recovery.
For situations where you won't be running an auto (typical testing and practice field scenarios), the use of Triggers can facilitate automatic checking and scheduling
class ExampleSubsystem extends SubsystemBase{
  ExampleSubsystem(){
    // Automatically schedule homing the first time the robot is enabled
    new Trigger(DriverStation::isEnabled)
      .and(()->homed==false)
      .onTrue(goHome());
  }
}
Alternatively, if you don't want to use the withInterruptBehavior(...)
option, you can hijack other command calls with Commands.either(...)
or new ConditionalCommand(...)
class ExampleSubsystem extends SubsystemBase{
  /* ... */
  //Intercept commands directly to prevent unhomed operation
  public Command goUp(){
    return Commands.either(
      run(()->motor.set(0.5)), // normal behavior, once homed
      goHome(),                // otherwise, home first
      ()->homed
    );
  }
  /* ... */
}
tags:
- stub
Requires
Auto Differential
Gyro Sensing
tags:
- stub
requires
Auto Differential
Limelight Basics
The official radio documentation is complete and detailed, and should serve as your primary resource.
https://frc-radio.vivid-hosting.net/
However, it's not always obvious what you need to look up to get moving. Consider this document just a simple guide and jumping-off point to find the right documentation elsewhere.
You don't! The Field Technicians will program the radio for you at competitions.
When configured for competition play, you cannot connect to the radio via wifi. Instead, use an ethernet cable, or a USB cable plugged into the roboRIO.
The home radio configuration is a common pain point
This option is the simplest: Just connect the robot via an ethernet or USB, and do whatever you need to do. For quick checks, this makes sense, but obviously is suboptimal for things like driving around.
The radio does have a 2.4ghz wifi hotspot, albeit with some limitations. This mode is suitable for many practices, and is generally the recommended approach for most every-day practices due to ease of use.
Note, this option requires access to the tiny DIP switches on the back of the radio! You'll want to make sure that your hardware teams don't mount the radio in a way that makes this impossible to access.
This option uses a second radio to connect your laptop to the robot. This is the most cumbersome and limited way to connect to a robot, and makes swapping who's using the bot a bit more tricky.
However, this is also the most performant and reliable connection method. This is recommended when doing extended driving sessions, final performance tuning, and other scenarios where you're trying to simulate competition-ready environments.
This option has a normal robot on one end, and your driver-station setup will look like the following image. See https://frc-radio.vivid-hosting.net/overview/practicing-at-home for full setup directions.
Port forwarding allows you to bridge networks across different interfaces.
The practical application in FRC is being able to access network devices via the USB interface! This is mostly useful for quickly interfacing with Vision hardware like the Limelight or Photonvision at competitions.
//Add in the constructor in Robot.java or RobotContainer.java
// If you're using a Limelight
PortForwarder.add(5800, "limelight.local", 5800);
// If you're using PhotonVision
PortForwarder.add(5800, "photonvision.local", 5800);
The radio has some scriptable interfaces, allowing programmatic access to quickly change or read settings.
Understand the typical Git operations most helpful for day-to-day programming
This module is intended to be completed alongside other tasks.
Start from the main branch, and create a new branch to represent that card.
In general:
Git is a "source control" tool intended to help you manage source code and other text data.
Git has many superpowers, but the basic level provides "version control"; This allows you to create "commits", which allow you to capture your code's state at a point in time. Once you have these commits, git lets you go back in time, compare to what you've done, and more.
Fundamental to Git is the concept of a "difference", or a diff for short. Rather than just duplicating your entire project each time you want to make a commit snapshot, Git actually just keeps track of what you've changed.
In a simplified view, updating this simple subsystem
/**Example class that does a thing*/
class ExampleSubsystem extends SubsystemBase{
private SparkMax motor = new SparkMax(1);
ExampleSubsystem(){}
public void runMotor(){
motor.run(1);
}
public void stop(){/*bat country*/}
public void go(){/*fish*/}
}
to this
/**Example class that does a thing*/
class ExampleSubsystem extends SubsystemBase{
private SparkMax motor = new SparkMax(1);
private Encoder encoder = new Encoder();
ExampleSubsystem(){}
public void runMotor(double power){
motor.run(power);
}
public void stop(){/*bat country*/}
public void go(){/*fish*/}
}
would be stored in Git as
class ExampleSubsystem extends SubsystemBase{
private SparkMax motor = new SparkMax(1);
+ private Encoder encoder = new Encoder();
ExampleSubsystem(){}
- public void runMotor(){
- motor.run(1);
+ public void runMotor(double power){
+ motor.run(power);
}
public void stop(){/*bat country*/}
With this difference, the changes we made are a bit more obvious. We can see precisely what we changed, and where we changed it.
We also see that some stuff is missing in our diff: the first comment is gone, and we don't see go or our closing brace. Those didn't change, so we don't need them in the commit.
However, there are some unchanged lines, near the changed lines. Git refers to these as "context". These help Git figure out what to do in some complex operations later. It's also helpful for us humans just taking a casual peek at things. As the name implies, it helps you figure out the context of that change.
We also see something interesting: When we "change" a line, Git actually stores it as removing the old line and adding a new one.
Now that we have some changes in place, we want to "Commit" that change to Git, adding it to our project's history.
A commit in git is just a bunch of changes, along with some extra data: most relevantly, a commit message describing the change, the author, and a unique hash that identifies the commit.
These commits form a sequence, building on top from the earliest state of the project. We generally assign a name to these sequences, called "branches".
A typical project starts on the "main" branch; after a few commits, you'll end up with a nice, simple history like this.
It's worth noting that a branch really is just a name that points to a commit, and is mostly a helpful book-keeping feature. The commits and commit chain do all the heavy lifting. Basically anything you can do with a branch can be done with a commit's hash instead!
We're now starting to get into Git's superpowers. You're not limited to just one branch. You can create new branches, switch to them, and then commit, to create commit chains that look like this:
Here we can see that mess for qual 4 and mess for qual 8 are built off the main branch, but kept as part of the competition branch. This means our main branch is untouched. We can now switch back and forth using git switch main and git switch competition to access the different states of our codebase.
We can, in fact, even continue working on main, adding commits like normal.
Being able to have multiple branches like this is a foundational part of how Git works, and a key detail of its collaborative model.
However, you might notice the problem: We currently can access the changes in competition or main, but not both at once.
Merging is what allows us to do that. It's helpful to think of merging the changes from another branch into your current branch.
If we merge competition into main, we get this. Both changes ready to go! Now main can access the competition branch's changes.
However, we can equally merge main into competition, granting competition access to the changes in main.
Now that merging is a tool, we have unlocked the true power of git. Any set of changes is built on top of the others, and we can grab changes without interrupting our existing code and any other changes we've been making!
This feature powers git's collaborative nature: You can pull in changes made by other people just as easily as you can your own. They just have to have the same parent somewhere up the chain so git can figure out how to step through the sequence of changes.
Git is a distributed system, and as such has a few different places that all these changes can live.
The most apparent one is your actual code on your laptop, forming the workspace. As far as you're concerned, this is just the files in the directory. However, Git sees them as the culmination of all changes committed in the current branch, plus any uncommitted changes.
The next one is "staging": This is just the incomplete next commit, and holds all the changes you've added as part of it. Once you properly commit these changes, your staging will be cleared, and you'll have a new commit in your tree.
It basically looks like this:
Next is a "remote", representing a computer somewhere else. In most Git work, this is just Github. There's several commands focused on interacting with your remote, and this just facilitates collaborative work and offsite backup.
git init: This creates a new git repository for your current project. You want to run this in the base directory of your project.
git add <files>: This stages the listed changes, adding them to your next commit.
There's a lot of tools that interact with your Git repository, but it's worth being mindful about which ones you pick! Many tools do unexpected things.
Recommended:
This guide runs through how to examine a robot design, analyze the mechanics, game piece path, and form a plan to generate a code structure to control the system.
For an FRC bot, "move the game piece" is the fundamental design objective, and serves as a great way to step through the bot.
Being able to identify basic mechanisms is key to being able to model a robot in code. This non-exhaustive list should help provide some vocabulary for the analysis on typical bots.
Rollers: The simplest mechanical system, a motor and a shaft that spins.
Flywheel: A specialized Roller system with extra weight, intended to maintain a speed when launching objects.
Indexer: A mechanism to precisely align, prepare, or track game pieces internally in the robot. Often a Roller, but can be very complex.
Shooter: A compound system, usually consisting of at least a Flywheel and an Indexer, and sometimes an Arm or other aiming structure.
Intake: A specialized compound system intended for getting new game pieces into the robot. Generally consists of a Roller, often with another positioning device.
Arm: A system that rotates around a pivot point. Usually positions another subsystem.
Elevator: A system that travels along a linear track of some sort. Typically up/down, hence the name.
Swerve Drive or Drivetrain: Makes robot go whee along the ground.
Crescendo Bot Code
Note: The actual code for this bot may differ from this breakdown; This is based on the initial design provided, not the final version after testing.
The game piece for this game is a 2" tall, 14" diameter orange donut called a "note", which will be referenced throughout this breakdown.
There are two note scoring objectives: Shooting into an open "speaker" slot about 8" high, or placing into an "amp", which is a 2" wide slot about 24" off the ground.
Lastly, climbing is an end game challenge, with an additional challenge of the "trap", which is effectively scoring into an amp while performing a climb.
For this, we'll start with the game piece (note) path, and just take note of what control opportunities we have along this path.
The note starts on the floor, and hits the under-bumper intake. This is a winding set of linked rollers driven by a single motor. This system has rigid control of the note, ensuring a "touch it own it" control path.
Indexer: The game piece is then handed off to an indexer (or "passthrough"). This system is two rollers + motors above and below the note path, and has a light hold on the note; Just enough to move it, but not enough to fight other systems for physical control.
Flywheel: The next in line is a Flywheel system responsible for shooting. This consists of two motors (above and below the note's travel path), and the rollers that make physical contact. When shooting, this is the end of game piece path. This has a firm grip to impart significant momentum quickly.
Dunkarm + DunkarmRollers: When amp scoring/placing notes, we instead hand off to the rollers in front of the shooter. These rollers are mounted on an arm, which move the rollers out of the way of the shooter, or can move it upward.
Shooter: The Indexer and Shooter are mounted on a pivoting arm, which we denote as the shooter. This allows us to set the note angle.
Climber: The final mechanism is the climber. There's two climber arms, each with their own motor.
The indexer has a single LaserCan rangefinder, located just before the shooter. This will allow an analog view of the note position in the system.
Again, let's follow the standard game piece flow path.
This seems to be about it for conflicts between control systems
We should also do a quick check of the hard stops; These serve as reference points and physical constraints.
Before getting into how the code is structured, let's decide what the code should be doing during normal gameplay cycles
We can now start looking at how to structure the code to make this robot happen. Having a good understanding of Command flow helps here.
We'll start with subsystem breakdowns. Based on the prior work, we know there's lots of loose coupling: Several subsystems are needed for multiple different actions, but nothing is strongly linked. The easy ones are:
The Dunkarm + Dunkarm rollers is less clear. From an automation perspective, we could probably combine these. But the humans will want to have separate buttons for "put arm in position" and "score the note". To avoid command sequence conflicts, we'd want these separate.
Next we define what the external Command API for each subsystem should look like so we can manipulate them.
Intake:
Flywheel:
Shooter Pivot
Dunkarm:
Dunkarm Rollers:
Climber:
Indexer