Introductory Topics
Standard Topics
Special Courses
Capstone Topics

Success Criteria

  • Install the WPILib VS Code IDE
  • Make a new robot project
  • Create a new subsystem
  • Install the REV third-party library (REVLib)
  • Basic requirements to start working on robot projects
  • Create a new empty subsystem
  • Create a new empty command
  • Add your new command and subsystem to RobotContainer.

Robot Code structure

When you open a new robot project you'll see a lot of files we'll interact with.

  • src/main
    • deploy
    • java
      • frc/robot
        • commands
          • ExampleCommand.java
        • subsystems
          • ExampleSubsystem.java
        • Constants.java
        • Main.java
        • Robot.java
        • RobotContainer.java
  • vendordeps

For typical projects, you'll be spending the most time in RobotContainer, subsystems, and occasionally commands.

For some early practice projects or special use cases, you might also interact with Robot.java a bit.

Third Party Libraries

Many helpful utilities we'll use for robot projects come from code that's not available by default. WPILib has a small dependency manager to assist with installing these, detailed here:

Third Party Tools

We'll also utilize a number of software tools for special interactions with hardware or software components. Some of these include

Putting Things in the Proper Place

The hardest part of getting started with robots is figuring out where your robot code goes.

Robot.java: A microcosm of the complete robot

Robot.java is a very powerful file, and it's possible to write your entire robot in just this one file! For reasons we'll get into, we do not want to do this. However, its setup does a good job explaining how a robot works. Let's look at the structure of this file for now.

public class Robot extends TimedRobot {
	private Command m_autonomousCommand;
	private final RobotContainer m_robotContainer;
	public Robot() {
		m_robotContainer = new RobotContainer();
	}
	
	public void robotPeriodic() {}
	
	public void disabledInit() {}
	public void disabledPeriodic() {}
		
	public void autonomousInit() {}
	public void autonomousPeriodic() {}
	
	public void teleopInit() {}
	public void teleopPeriodic() {}
	
	public void testInit() {}
	public void testPeriodic() {}
	
	//a few more ignored bits for now
}

From the method pairings, we can group these into several different modes:

  • "Robot"
  • Autonomous
  • Teleop
  • Test

Indeed, if we look at our Driver Station, we see several modes mentioned.
driverstation.jpg
Teleop, Auto, and Test are simply selectable operational modes. However, you might want to utilize each one slightly differently.

"Practice mode" is intended to simulate real matches: This DriverStation mode runs Autonomous mode for 15 seconds, and then Teleop Mode for the remainder of a match time.

"Disabled" mode is automatically selected whenever the robot is not enabled. This includes when the robot boots up, as well as as whenever you hit "disabled" on the driver station.
Disabled mode will also cancel any running Commands .

"Robot mode" isn't an explicit mode: Instead, of "Robot Init", we just use the constructor: It runs when the robot boots up. In most cases, the primary task of this is to set up Robot Container.
robotPeriodic just runs every loop, regardless of what other loop is also running.

We can also see a grouping of

  • Init
  • Periodic
    Whenever any new "mode" starts, we first run the Init function once, and then we run the Periodic. The robot will continue to run the associated Periodic function every loop, 50 times per second.

We generally won't add much code in Robot.java, but understanding how it works is a helpful starting point to understanding the robot itself.

RobotContainer.java

As mentioned above, the "Robot Container" is created as the robot boots up. When you create a new project, this file contains a small number of functions and examples to help you stay organized.

public class RobotContainer{
	ExampleSubsystem subsystem = new ExampleSubsystem();
	ExampleCommand command = new ExampleCommand();
	CommandXboxController joystick = new CommandXboxController(0);
	RobotContainer(){
		configureBindings();
	}
	public void configureBindings(){
		//Not a special function; Just intended to help organize 
	}
	public Command getAutonomousCommand(){/*stuff*/}
}

This file introduces a couple new concepts

  • Commands, which form the "actions" you want the robot to perform
  • Subsystems, or the different parts of the robot that could perform actions
  • Joysticks, which serve as the standard input method.

The use of Commands and Subsystems goes a long way to managing complex robot interactions across many subsystems. However, they're certainly tricky concepts to get right off the bat.

Constants.java

Sometimes, you'll have oddball constants that you need to access in multiple places in your code. Constants.java advertises itself as a place to sort and organize those values.

Without getting too into the "why", in general you should minimize use of Constants.java; It leads to several problems as your robot complexity increases.

Instead, simply follow good practices for scope encapsulation, and keep the constants at the lowest necessary scope.

  • If a value is used once, just use the value directly in place. This covers a lot of setup values like PID tuning values.
  • If your value is used repeatedly inside a subsystem, make it a private constant in that subsystem. This is common for conversion factors, min/max values, or paired output values
  • If a constant is strongly associated with a subsystem, but needs to be referenced elsewhere, make it a public constant in that subsystem.
  • Lastly, if something is not associated with a subsystem, and used repeatedly across multiple subsystems, Constants.java is the place.

If you find yourself depending on a lot of constants, you might need to consider Refactoring your code a bit to streamline things. Note that Stormbots code has almost nothing in here!

Robot Code Basics
Coding Basics.canvas

Requires: Robot Code Basics
Recommends: Commands

Success Criteria

  • Spin a motor
  • Configure a motor with max current
  • Control on/off via joystick button
  • Control speed via joystick

Setup and prep

Learning order

This curriculum does not require or assume the Command structure; It's just about spinning motors.
However, it's recommended to learn Motor Control alongside or after Commands, as we'll use them for everything afterwards anyway.

REVLib

This documentation assumes you have the REVLib third-party library installed. You can find instructions here:
https://docs.wpilib.org/en/latest/docs/software/vscode-overview/3rd-party-libraries.html

Wiring and Electrical

This document also assumes correct wiring and powering of a motor controller. This should be the case if you're using a testbench.

Reference Implementation

// Robot.java
public class Robot extends TimedRobot{

	@Override
	public void teleopPeriodic(){
		// Motor and joystick control goes here; see the sketch below
	}
}
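As a minimal sketch of where that stub heads, here's a Robot.java that spins a motor from a joystick. The CAN ID (1), controller port (0), and button choice are placeholders, and the exact REVLib package and class names depend on the library version you have installed:

// Robot.java -- a sketch, assuming a brushless motor on a SparkMax at CAN ID 1
import com.revrobotics.spark.SparkMax;
import com.revrobotics.spark.SparkLowLevel.MotorType;
import edu.wpi.first.wpilibj.TimedRobot;
import edu.wpi.first.wpilibj.XboxController;

public class Robot extends TimedRobot {
	SparkMax motor = new SparkMax(1, MotorType.kBrushless);
	XboxController joystick = new XboxController(0);

	@Override
	public void teleopPeriodic() {
		if (joystick.getAButton()) {
			// On/off control: hold A to spin at a fixed output
			motor.set(0.3);
		} else {
			// Speed control: follow the left stick (inverted so "up" is positive)
			motor.set(-joystick.getLeftY());
		}
	}
}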
Motor Control

Success Criteria

  • Create a command that runs indefinitely
  • Have that command start+end on a joystick button
  • Create a command that starts on a joystick press, and stop it with a different button
  • Create a default command that lets you know when it's running through Telemetry
  • Create a runCommand using a method reference
  • Create a runCommand using a lambda
Learning order

You can learn this without having done Motor Control, but it's often more fun to learn alongside it in order to have more interesting, visual commands while experimenting.
The commands provided as an example just print little messages visible in the RioLog, allowing this to be set up without motors

What is a command

A Command is an event-driven code structure that allows you to manage when code runs, what resources it uses, and when it ends.

In the context of a robot, it helps you manage a lot of the complexity involved with coordinating multiple Subsystems.

The code structure itself is fairly straightforward, and defines a few methods; Each method defines what code runs at what time.

class ExampleCommand extends CommandBase{
	public ExampleCommand(){}
	public void initialize(){}
	public void execute(){}
	public boolean isFinished(){ return false; }
	public void end(boolean cancelled){}
}

Behind the scenes, the robot runs a command scheduler, which helps manage what runs when. Once started, a command will run according to the following flowchart, more formally known as a state machine.

initialize → execute → isFinished? (false: loop back to execute / true: end)

This is the surface level complexity, which sets you up for how to view, read, and write commands.

Requirements and resources

A key aspect of Commands is their ability to claim temporary, exclusive ownership over a Subsystem. This is done by passing the subsystem into a command, and then adding it as a requirement:

class ExampleCommand extends CommandBase{
	public ExampleCommand(ExampleSubsystem subsystemName){
		addRequirements(subsystemName);
	}
}

Now, whenever the command is started, it will forcibly claim that subsystem. It'll release that claim when it runs its end() block.

This ability of commands to hold a claim on a subsystem has a lot of utility. The main value is in preventing you from doing silly things like trying to tell a motor to go forward and backward at once.

Events and interruptions

Now that we've established subsystem ownership, what happens when you do try to tell your motor to go forward and then backward?

When you start the command, it will forcibly interrupt other commands that share a resource with it, ensuring that the new command has exclusive access.

It'll look like this

initialize → execute → isFinished? (false: loop back to execute / true: end)

Existing commands that share subsystems are cancelled just before the new command's initialize runs; subsystems are released when end runs.

When a command is cancelled, the command scheduler runs the command's end(cancelled) block, passing in a value of true. While not typical, some commands will need to do different cleanup routines depending on whether they exited on task completion, or whether something else kicked them off a subsystem.

Starting and Stopping Commands

Commands can be started in one of 3 ways:

  • via a Trigger's start condition
  • Directly scheduling it via the command's .schedule() method.
  • Automatically as a DefaultCommand

They can be stopped in a few ways (both starting and stopping are sketched below):

  • When the command returns true from its isFinished() method
  • When launched by a Trigger, and the run condition is no longer met
  • Calling a command's .cancel() method directly
  • When the command is cancelled by a new command that claims a required subsystem.
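As a sketch of those options, assuming a hypothetical roller subsystem with a spinForward() command factory and a CommandXboxController named joystick:

// Started by a Trigger: runs while A is held, cancelled on release
joystick.a().whileTrue(roller.spinForward());

// Started and stopped directly
Command spin = roller.spinForward();
spin.schedule(); // start it now
spin.cancel();   // stop it now; its end(true) block runs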

Default Commands

It's often the case that a subsystem will have a clear, preferred action when nothing else is going on. In some cases, it's stopping a spinning roller, intake, or shooter. In others it's retracting an intake. Maybe you want your lights to do a nice idle pattern. Maybe you want your chassis joystick to just start when the robot does.

Default commands are ideal for this. Default commands run just like normal commands, but are automatically re-started once nothing else requires the associated subsystem resource.

Just like normal commands, they're automatically stopped when the robot is disabled, and cancelled when something else requires the subsystem.
Unlike normal commands, they are not allowed to return true from isFinished(); The scheduler expects default commands to run until they're cancelled.

Also unlike other commands, a default command must require its associated subsystem, and cannot require any other subsystems.
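For instance, inside a hypothetical Roller subsystem's constructor, a never-ending "stop" command makes a good default:

// In the Roller subsystem constructor
// run() loops forever and never finishes on its own, which is what a default command wants
setDefaultCommand(run(() -> motor.set(0)));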

Command groups + default commands

It's worth making a note that a Default Command cannot start during a Command Group that contains a command requiring the subsystem! If you're planning complex command sequences like an auto, make sure they don't rely on DefaultCommands as part of their operation.

When to require

As you're writing new subsystems, make sure you consider whether you should require a subsystem.

You'll always want to require subsystems that you will modify, or otherwise need exclusive access to. This commonly involves commands that direct a motor, change settings, or something of that sort.

In some cases, you'll have a command that only reads from a subsystem. Maybe you have an LED subsystem, and want to change lights according to an Elevator subsystem's height.
One way to do this is to have a command that requires the LEDs (it needs to change the lights), but does not require the Elevator (it's just reading the encoder).

As a general rule, most commands you write will simply require exactly one subsystem. Commands that need to require multiple subsystems can come up, but typically this is handled by command composition and command groups.

External Commands

Every new project will have an example command in a dedicated file, which should look familiar

class ExampleCommand extends CommandBase{
	public ExampleCommand(){
		//Runs once when the command is created as the robot boots up.
		//Register required subsystems, if appropriate
		//addRequirements(subsystem1, subsystem2...);
	}
	public void initialize(){
		//Runs once when the command is started/scheduled
	}
	public void execute(){
		//Runs every code loop
	}
	public boolean isFinished(){
		//Returns true if the command considers its task done, and should exit
		return false;
	}
	public void end(boolean cancelled){
		//Perform cleanup; Can do different things if it's cancelled
	}
}

This form of command is mostly good for instructional purposes while you're getting started.

On more complex robot projects, trying to use the file-based Commands forces a lot of mess into your Subsystems; In order for these to work, you need to make many of your Subsystem details public, often requiring you to make a bunch of extra functions to support them.

Command Factories

Command factories are the optimal way to manage your commands. With this convention, you don't create separate Command files, but instead create methods in your Subsystem that build and return new Command objects. This convention is commonly called a "Factory" pattern.
Here's a short example and reference layout:

//In your subsystem
class Roller extends SubsystemBase{
	Roller(){}

	public Command spinForward(){
		return Commands.run(()->{
			System.out.println("Spin Forward!!");
		}, this);
	}
}
//In your RobotContainer, let's bind that command to a button
class RobotContainer{
	Roller roller = new Roller();
	CommandXboxController joystick = new CommandXboxController(0);
	RobotContainer(){
		joystick.a().whileTrue(roller.spinForward());
	}
}

That's it! Not a lot of code, but gives you a flexible base to start with.

This example uses Commands.run(), one of the many options in the Commands class. These command shortcuts let you provide lambdas representing some combination of a Command's normal Initialize, Execute, isFinished, or End functions. A couple notable examples are:

  • Commands.run : Takes a single lambda for the Execute blocks
  • Commands.startRun : Takes two lambdas for the Initialize and Execute blocks
  • Commands.startEnd : Takes two lambdas for the Initialize and End Blocks

Most commands you'll write can be written like this, making for simple and concise subsystems.

Watch the Requires

Many Commands helpers require you to provide the required subsystem after the lambdas. If you forget, you can end up with multiple commands fighting to modify the same subsystem.

Building on the above, Subsystems have several of these command helpers built in! You can see this.startRun(...), this.run(...), etc; These work the same as the Commands versions, but automatically include the current subsystem as a requirement.

There's a notable special case in new FunctionalCommand(...), which takes 4 lambdas for a full command, perfectly suitable for those odd use cases.
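Here's a sketch of both, inside a hypothetical Roller subsystem, assuming a REV motor controller that provides getOutputCurrent():

// Subsystem-bound helper: the requirement on this subsystem is added automatically
public Command spinBackward(){
	return this.startEnd(
		()-> motor.set(-0.5), // initialize
		()-> motor.set(0)     // end
	);
}

// FunctionalCommand: all four blocks as lambdas, with requirements last
public Command spinUntilStalled(){
	return new FunctionalCommand(
		()-> motor.set(0.5),                // initialize
		()-> {},                            // execute
		cancelled -> motor.set(0),          // end
		()-> motor.getOutputCurrent() > 30, // isFinished
		this                                // requirements
	);
}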

Command Composition

The real power of commands comes from Command Compositions and "decorator" functions. These functions enable a lot of power, allowing you to change how and when commands run, and to pair them with other commands for complex sequencing and autos.

For now, let's focus on the two that are more immediately useful (sketched briefly after the list):

  • command.withTimeout(time) , which runs a command for a set duration.
  • command.until(()->someCondition) , which allows you to exit a command on things like sensor inputs.
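For instance, building on the roller factory from above (the beamBreak sensor here is a hypothetical DigitalInput):

// Run the roller for at most 2 seconds
roller.spinForward().withTimeout(2);

// Run the roller until a beam-break sensor sees a game piece
roller.spinForward().until(()-> beamBreak.get());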

The Commands class also has helpers for hooking multiple commands together. The most useful is a simple sequence:

Commands.sequence(
	roller.spinForward().withTimeout(0.1),
	roller.spinBackward().withTimeout(0.1),
	roller.spinForward().withTimeout(0.5)
)
Commands

Success Criteria

  • Create a Differential Drivetrain
  • Configure a Command to operate the drive using joysticks
  • ??? Add rate limiting to joysticks to make the system control better
  • ??? Add constraints to rotation to make robot drive better
Differential Drive

Requires:
Motor Control

Recommends:
FeedForwards
PID

Success Criteria

  • Create a Roller subsystem
  • Calibrate the system to use RPM
  • Create Commands for running forward and backwards using the target RPM
  • Bind them to controller buttons to move the game piece forward or backward
  • Create a default command that stops the subsystem
SuperStructure Rollers

Success Criteria

  • Create a simple autonomous that drives forward and stops
  • Create a two-step autonomous that drives forward and backward
  • Create a four-step autonomous that drives forward, runs a mock "place object" command, backs up, then turns around.
Auto Differential

Requires:
Motor Control

Success Criteria

  • Create a velocity FF for a roller system that enables you to set the output in RPM
  • Create a gravity FF for an elevator system that holds the system in place without resisting external movement
  • Create a gravity FF for an arm system that holds the system in place without resisting external movement

Synopsis

Feedforwards model an expected motor output for a system to hit specific target values.
The easiest example is a motor roller. Let's say you want to run at ~3000 RPM. You know your motor has a top speed of ~6000 RPM at 100% output, so you'd correctly expect that driving the motor at 50% would get about 3000 RPM. This simple correlation is the essence of a feed-forward. The details are specific to the system at play.

Explanation

The WPILib docs have good fundamentals on feedforwards that are worth reading.
https://docs.wpilib.org/en/stable/docs/software/advanced-controls/controllers/feedforward.html

Tuning Parameters

Feed-forwards are specifically tuned to the system you're trying to operate, but helpfully fall into a few simple terms, and straightforward calculations. In many cases, the addition of one or two terms can be sufficient to improve and simplify control.

kS : Static constant

The simplest feedforward you'll encounter is the "static feed-forward". This term represents initial momentum, friction, and certain motor dynamics.

You can see this in systems by simply trying to move very slowly. You'll often notice that the output doesn't move the system until you hit a certain threshold. That threshold is approximately equal to kS.

The static feed-forward affects the output according to the simple equation $output = k_S \cdot \text{sgn}(direction)$, where $\text{sgn}(direction)$ is +1 or -1 depending on the direction you intend to move.

kG : Gravity constant

A kG value effectively represents the output needed for a system to negate gravity.

Elevators are the simpler case: You can generally imagine that since an elevator has a constant weight, it should take a constant amount of force to hold it up. This means the elevator gravity gain is simply a constant value, affecting the output as $output = k_G$; You don't need any other considerations regarding the system motion, because gravity is always constant.

A more complex kG calculation is needed for pivot or arm systems. You can get a good sense of this by grabbing a heavy book, and holding it at your side with your arm down. Then, rotate your arm outward, fully horizontal. Then, rotate your arm all the way upward. You'll probably notice that the book is much harder to hold steady when it's horizontal than up or down.

The same is true for these systems, where the force needed to counter gravity changes based on the angle of the system. To be precise, it's maximum at horizontal, and zero when directly above or below the pivot. Mathematically, it follows the cosine of the angle from horizontal, lending this version of the feed-forward the nickname kCos.

This form of the gravity constant affects the output according to $output = k_G \cdot \cos(\theta)$, where $k_G$ is the maximum output, at horizontal. [1]

kV : Velocity constant

The velocity feed-forward represents the expected output to maintain a target velocity. This term accounts for physical effects like dynamic friction, air resistance, and a handful of other velocity-dependent losses.

This is most easily visualized on systems with a velocity goal state. In that case, the target velocity $v$ is easily known, and contributes to the output as $output = k_V \cdot v$.

In contrast, for positional control systems, knowing the desired system velocity is quite a challenge. In general, you won't know the target velocity unless you're using Motion Profiles to generate the instantaneous velocity target.

kA : Acceleration constant

The acceleration feed-forward largely negates a few inertial effects. It simply provides a boost to output to achieve the target velocity quicker.

Like the target velocity, the target acceleration is typically only known when you're working with Motion Profiles.

The equations of FeedForward

Putting this all together, it's helpful to de-mystify the math happening behind the scenes.

The short form is $output = k_G + k_S \cdot \text{sgn}(v) + k_V \cdot v + k_A \cdot a$ (with kG scaled by $\cos(\theta)$ for arms), which is just a re-clarification of the terms and their units:
  • $k_G$ : Output to overcome gravity (output)
  • $k_S$ : Output to overcome static friction (output)
  • $k_V$ : Output per unit of target velocity (output per velocity unit)
  • $k_A$ : Output per unit of target acceleration (output per acceleration unit)

A roller system will often simply be $output = k_S \cdot \text{sgn}(v) + k_V \cdot v + k_A \cdot a$.
If you don't have a motion profile, kA will simply be zero, and kS might also be negligible unless you plan to operate at very low RPM.

An elevator system will look similar: $output = k_G + k_S \cdot \text{sgn}(v) + k_V \cdot v + k_A \cdot a$.
Without a motion profile, you cannot properly utilize kV and kA, which simplifies down to $output = k_G + k_S \cdot \text{sgn}(v)$,
where $\text{sgn}(v)$ is generally derived from the difference between the current and previous positions (since you know both).

Lastly, arm systems differ only by the cosine term scaling kG: $output = k_G \cos(\theta) + k_S \cdot \text{sgn}(v) + k_V \cdot v + k_A \cdot a$.
Again simplifying for systems with no motion profile, you get $output = k_G \cos(\theta) + k_S \cdot \text{sgn}(v)$.
It's helpful to recognize that because the angle is being fed to a cosine function, you cannot use degrees here! Make sure to convert.
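In Java terms, the conversion looks like this (angleDegrees is just a stand-in for however your subsystem measures the arm angle):

// Math.cos expects radians; convert if your encoder is calibrated in degrees
double gravityOutput = kG * Math.cos(Math.toRadians(angleDegrees));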

Of course, the intent of a feed-forward is to model your mechanics to improve control. As your system increases in complexity, and demands for precision increase, optimal control might require additional complexity! A few common cases:

  • If you have a pivot arm that extends, your kG won't be constant!
  • Moving an empty system and one loaded with heavy objects might require different feed-forward models entirely.
  • Long arms might be impacted by motion of systems they're mounted on, like elevators or the chassis itself! You can add that in and apply corrective forces right away.

Feed-forward vs feed-back

Since a feed-forward is a prediction about how your system behaves, it works very well for fast, responsive control. However, it's not perfect; If something goes wrong, your feed-forward simply doesn't know about it, because it's not measuring what actually happens.

In contrast, feed-back controllers like a PID are designed to act on the error between a system's current state and target state, and make corrective actions based on the error. Without first encountering system error, it doesn't do anything.

The combination of a feed-forward along with a feed-back system is the power combo that provides robust, predictable motion.

FeedForward Code

WPILib has several classes that streamline the underlying math for common systems, although knowing the math still comes in handy! The docs explain them (and associated warnings) well.
https://docs.wpilib.org/en/stable/docs/software/advanced-controls/controllers/feedforward.html

Integrating one into a robot project is as simple as crunching the numbers for your feed-forward and adding the result to the motor output you write every loop.

class ExampleSystem extends SubsystemBase{

	SparkMax motor = new SparkMax(...);
	// Declare our FF terms and our object to help us compute things.
	double kS = 0.0;
	double kG = 0.0;
	double kV = 0.0;
	double kA = 0.0;
	ElevatorFeedforward feedforward = new ElevatorFeedforward(kS, kG, kV, kA);
	
	ExampleSystem(){}

	Command moveManual(double percentOutput){
		return run(()->{
			double output;
			//We don't have a motion profile or other velocity control
			//Therefore, we can only assert that the velocity and accel are zero
			output = percentOutput+feedforward.calculate(0,0);
			// If we check the math, this feedforward.calculate() thus 
			// evaluates as simply kG
			
			// We can improve this by instead manually calculating a bit,
			// since we know the direction we want to move in
			output = percentOutput + Math.signum(percentOutput)*kS + kG;
			motor.set(output);
		});
	}

	Command movePID(double targetPosition){
		return run(()->{
			//Same notes as moveManual's calculations 
			var feedforwardOutput = feedforward.calculate(0,0);
			// When using the Spark closed loop control, 
			// we can pass the feed-forward directly to the onboard PID
			motor
			.getClosedLoopController()
			.setReference(
				targetPosition,
				ControlType.kPosition,
				ClosedLoopSlot.kSlot0,
				feedforwardOutput, 
				ArbFFUnits.kPercentOut
			);
			//Note, the ArbFFUnits should match the units you calculated!
		});
	}

	Command moveProfiled(double targetPosition){
		// This is the only instance where we know all parameters to make 
		// full use of a feedforward.
		// Check [[Motion Profiles]] for further reading
		return Commands.none(); // placeholder until a motion profile is implemented
	}
	
}

Finding Feed-Forward Gains

High gains

When tuning feed-forwards, it's helpful to recognize that gains that are too high will result in notable problems, while gains that are too low generally just result in lower performance.
Just remember that the lowest possible value is 0, which is equivalent to not using that feed-forward in the first place; You can only improve from there.

Finding kS and kG

These two terms are defined at the boundary between "moving" and "not moving", and thus are closely intertwined. Or, in other words, they interfere with finding each other. So it's best to find them both at once.

It's easiest to find these with manual input, with your controller input scaled down to give you the most possible control.

Start by positioning your system so you have room to move both up and down. Then, hold the system perfectly steady, and increase output until it just barely moves upward. Record that value.
Hold the system stable again, and then decrease output until it just barely starts moving down. Again, record the value.

Thinking back to what each term represents, if a system starts moving up, then the provided input must be equal to $k_G + k_S$; You've negated both gravity and the friction holding it in place. Similarly, to start moving down, you need to be applying $k_G - k_S$. This insight means you can generate the following two equations:

$output_{up} = k_G + k_S$ and $output_{down} = k_G - k_S$, which solve to $k_G = \frac{output_{up} + output_{down}}{2}$ and $k_S = \frac{output_{up} - output_{down}}{2}$.

Helpfully, for systems where $k_G = 0$, like roller systems, several terms cancel out and you just get $k_S = output_{up}$.

For pivot/arm systems, this routine works as described if you can measure kG at approximately horizontal. It cannot work if the pivot is vertical. If your system cannot be held horizontal, you may need to be creative, or do a bit of trig to account for your recorded value being scaled down by $\cos(\theta)$.

Importantly, this routine actually returns a kS that's often slightly too high, resulting in undesired oscillation. That's because we recorded a minimum that causes motion, rather than the maximum value that doesn't cause motion. Simply put, it's easier to find this way. So, we can just compensate by reducing the calculated kS slightly; Usually multiplying it by 0.9 works great.
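As a worked example with made-up measurements: say the system just barely moves up at an output of 0.25, and just barely moves down at 0.15. Then:

$$k_G = \frac{output_{up} + output_{down}}{2} = \frac{0.25 + 0.15}{2} = 0.20$$
$$k_S = 0.9 \times \frac{output_{up} - output_{down}}{2} = 0.9 \times \frac{0.25 - 0.15}{2} = 0.045$$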

Finding roller kV

Because this type of system is relatively linear and simple, finding kV is pretty simple. We know that $output = k_V \cdot velocity$, and expect $maxOutput = k_V \cdot maxVelocity$.

We know maxVelocity is going to be constrained by our motor's maximum RPM, and that maxOutput is defined by our API units (either +/-1.0 for "percentage" or +/-12 for "volt output").

This means we can quickly assert that $k_V$ should be pretty close to $maxOutput / maxRPM$.
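As a quick worked example, assuming a roller whose free speed is about 6000 RPM and an output API in the -1.0 to 1.0 "percentage" range:

$$k_V \approx \frac{maxOutput}{maxVelocity} = \frac{1.0}{6000\ \text{RPM}} \approx 0.000167\ \text{output per RPM} \quad \left(\text{or } \frac{12}{6000} = 0.002\ \text{V per RPM}\right)$$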

Finding kV+kA

Beyond roller kV, kA and kV values are tricky to identify with simple routines, and require Motion Profiles to take advantage of. As such, they're somewhat beyond the scope of this article.

The optimal option is using System Identification to calculate the system response to inputs over time. This can provide optimal, easily repeatable results. However, it involves a lot of setup, and is potentially hazardous to your robot when done without caution.

The other option is to tune by hand; This is not especially challenging, and mostly involves a process of moving between goal states, watching graphs, and twiddling numbers. It usually looks like this:

  • Identify two setpoints, away from hard stops but with sufficient range of motion that you can hit target velocities.
  • While cycling between setpoints, increase kV until the system generates velocities that match the target velocities. They'll generally lag behind during the acceleration phase.
  • Then, increase kA until the acceleration shifts and the system closely tracks your profile.
  • Increase profile constraints and repeat until the desired system performance is attained. Starting small and slow prevents damage to the mechanics of your system.

This process benefits from a relatively low P gain, which helps keep the system stable. Once your system is tuned, you'll probably want a relatively high P gain, now that you can assert the feed-forward is keeping your error close to zero.


  1. Note, you might observe that the kCos output is reading the current system state, and say "hey! That's a feed-back system, not a feed-forward!" and you are technically correct; the best kind of correct. However, kCos is often implemented this way, as it's much more stable than the pure feed-forward version. In that version, you apply $k_G \cos(\theta_{target})$ regardless of what the actual angle happens to be. Feel free to do a thought experiment on how this might present problems in real-world systems.↩︎

FeedForwards

Success Criteria

  • Create a PID system on a test bench
  • Tune necessary PIDs using encoders
  • Set a velocity using a PID
  • Set an angular position using a PID
  • Set an elevator position using a PID
  • Plot the system's position, target, and error as you command it.

TODO

Synopsis

A PID system is a Closed Loop Controller designed to reduce system error through a simple, efficient mathematical approach.

You may also appreciate Chapters 1 and 2 from controls-engineering-in-frc.pdf, which cover PIDs very well.

Deriving a PID Controller from scratch

To get an intuitive understanding of PIDs and feedback loops, it can help to start from scratch, and recreate one from basic assumptions and simple code.

Let's start from the core concept of "I want this system to go to a position and stay there".

Initially, you might simply say "OK, if we're below the target position, go up. If we're above the target position, go down." This is a great starting point, with the following pseudo-code.

setpoint= 15  //your target position, in arbitrary units
sensor= 0 //Initial position
if(sensor < setpoint){ output = 1 }
else if(sensor > setpoint){ output = -1 }
motor.set(output)

However, you might see a problem. What happens when setpoint and sensor are equal?

If you responded with "It rapidly switches between full forward and full reverse", you would be correct. If you also thought "This sounds like it might damage things", then you'll understand why this controller is named a "Bang-bang" controller, due to the noises it tends to make.

Your instinct for this might be to simply not go full power. Which doesn't solve the problem, but reduces its negative impacts. But it also creates a new problem. Now it's going to oscillate at the setpoint (but less loudly), and it's also going to take longer to get there.

So, let's complicate this a bit. Let's take our previous bang-bang, but split the response into two different regions: Far away, and closer. This is easier if we introduce a new term: Error. Error just represents the difference between our setpoint and our sensor, simplifying the code and procedure. "Error" is a genuinely useful term, which we'll use a lot.

run(()->{
	setpoint= 15  //your target position, in arbitrary units
	sensor= 0 //read your sensor here
	error = setpoint-sensor 
	if     (error >  5){ output = 1 }
	else if(error >  0){ output = 0.2 }
	else if(error < -5){ output = -1 }
	else if(error <  0){ output = -0.2 }
	motor.set(output)
})

We've now slightly improved things; Now, we can expect more reasonable responses as we're close, and fast responses far away. But we still have the same problem; Those harsh transitions across each else if. Splitting up into more and more branches doesn't seem like it'll help. To resolve the problem, we'd need an infinite number of tiers, dependent on how far we are from our targets.

With a bit of math, we can do that! Our error term tells us how far we are, and the sign tells us what direction we need to go... so let's just scale that by some value. Since this is a constant value, and the resulting output is proportional to this term, let's call it kp: Our proportional constant.

run(()->{
	setpoint= 15  //your target position, in arbitrary units
	sensor= 0 //read your sensor here
	kp = 0.1
	error = setpoint-sensor 
	output = error*kp
	motor.set(output)
})

Now we have a better behaved algorithm! At a distance of 10, our output is 1. At 5, it's half. When on target, it's zero! It scales just how we want.

Try this on a real system, and adjust the kP until your motor reliably gets to your setpoint, where error is approximately zero.

In doing so, you might notice that you can still oscillate around your setpoint if your gains are too high. You'll also notice that as you get closer, your output drops to zero. This means, at some point you stop being able to get closer to your target.

This is easily seen on an elevator system. You know that gravity pulls the elevator down, requiring the motor to push it back up. For the sake of example, let's say an output of 0.2 holds it up. Using our previous kP of 0.1, a distance of 2 generates that output of 0.2. If the distance is 1, we only generate 0.1... which is not enough to hold it! Our system actually is only stable below where we want. What gives!

This general case is referred to as "standing error" (more formally, steady-state error); Every loop through our PID fails to reduce the error to zero, and the error eventually settles at a constant value. So.... what if.... we just add that error up over time? We can then incorporate that error into our outputs. Let's do it.

setpoint= 15  //your target position, in arbitrary units
errorsum=0
kp = 0.1
ki = 0.001
run(()->{
	sensor= 0 //read your sensor here
	error = setpoint-sensor
	errorsum += error
	output = error*kp + errorsum*ki
	motor.set(output)
})

The mathematical operation involved here is called integration, which is what this term is called. That's the "I" in PID.
In many practical FRC applications, this is probably as far as you need to go! P and PI controllers can do a lot of work, to suitable precision. This is a very flexible, powerful controller, and can get "pretty good" control over a lot of mechanisms.

This is probably a good time to read through the WPILib PID Controller page; It covers several useful features. Using this built-in PID, we can reduce our previous code to a nicely formalized version that looks something like this.

PIDController pid = new PIDController(kP, kI, kD);
run(()->{
	sensor = motor.getEncoder().getPosition();
	motor.set(pid.calculate(sensor, setpoint));
})

A critical detail in good PID controllers is the iZone. We can easily visualize what problem this is solving by just asking "What happens if we get a game piece stuck in our system"?
Well, we cannot get to our setpoint. So, our errorSum gets larger, and larger.... until our system is running full power into this obstacle. That's not great. Most of the time, something will break in this scenario.

So, the iZone allows you to constrain the amount of error the controller actually stores. It might be hard to visualize the specific numbers, but you can just work backward from the math. If output = errorsum*kI, then maxDesiredITermOutput = iZone*kI, so iZone = maxDesiredITermOutput/kI.
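For example, using the earlier kI of 0.001 and an (arbitrary) maximum desired I-term contribution of 0.2:

$$iZone = \frac{maxDesiredITermOutput}{kI} = \frac{0.2}{0.001} = 200 \text{ (in the same units as your error)}$$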

Lastly, what's the D in PID?

Well, it's less intuitive, but let's try. Have you seen the large spike in output when you change a setpoint? Give the output a plot, if you so desire. For now, let's just reason through a system using the previous example PI values, and a large setpoint change resulting in an error of 20.

Your PI controller is now outputting a value of 2.0; That's double full power! Your system will go full speed immediately with a sharp jolt, have a ton of momentum at the halfway point, and probably overshoot the final target. So, what we want to do is constrain the speed; We want it fast but not too fast. So, we want to reduce the output according to how fast we're going.
Since we're focusing on error as our main term, let's look at the rate the error changes. When the error is closing fast, we want to reduce the output. The rate of change is simply defined as error-previousError, so a similar strategy with a gain gives us output += kD*(error-previousError).
This indeed gives us what we want: When the error is closing quickly, that difference is large and negative, acting to reduce the total output and slow the corrective action.

However, this term has another secret power, which is disturbance rejection. Let's assume we're at a steady position, the system is settled, and error=0. Now, let's bonk the system downward, giving us a positive error. Suddenly error-previousError is positive, and the D term generates an upward force alongside the P term. For this interaction, all components of the PID are working in tandem to get things back in place.
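Putting all three pieces together, the hand-rolled loop looks roughly like this, in the same pseudo-code style as above (the kd value is just a placeholder):

setpoint= 15  //your target position, in arbitrary units
kp = 0.1
ki = 0.001
kd = 0.01     //placeholder gain
errorsum = 0
previousError = 0
run(()->{
	sensor= 0 //read your sensor here
	error = setpoint-sensor
	errorsum += error
	output = error*kp + errorsum*ki + (error-previousError)*kd
	previousError = error
	motor.set(output)
})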

Limitations of PIDs

OK, that's enough nice things. Understanding PIDs requires knowing when they work well, and when they don't, and when they actually cause problems.

  • PIDs are reactive, not predictive. Note our key term is "error" ; PIDs only act when the system is already not where you want it, and must be far enough away that the generated math can create corrective action.
  • Large setpoint changes break the math. When you change a setpoint, the P output gets really big, really fast, resulting in an output spike. When the PID is acting to correct it, the errorSum for the I term is building up, and cannot decrease until it's on the other side of the setpoint. This almost always results in overshoot, and is a pain to resolve.
  • Oscillation: PIDs inherently generate oscillations unless tuned perfectly. Sometimes big, sometimes small.
  • D term instability: D terms are notoriously quirky. Large D terms and velocity spikes can result in bouncy, jolting motion toward setpoints, and can cause harsh, very rapid oscillations around zero error, particularly when systems have significant Mechanical Backlash.
  • PIDs vs Hard Stops: Most systems have one or more Hard Stops, which present a problem for the I term output. This requires some consideration of how your encoders are initialized, as well as your setpoints.
  • Tuning is either simple....or very time consuming.

So, how do you make the best use of PIDs?

  • Reduce the range of your setpoint changes. There's a few ways to go about it, but the easiest are clamping changes, Slew Rate Limiting, and Motion Profiles. With such constraints, your error is always small, so you can tune more aggressively for that range.
  • Utilize FeedForwards to create the basic action; Feed-forwards create the "expected output" to your motions, reducing the resulting error significantly. This means your PID can be tuned to act sharply on disturbances and unplanned events, which is what they're designed for.

In other words, this is an error correction mechanism, and if you avoid adding error to begin with, you more effectively accomplish the motions you want. Throwing a PID at a system can get things moving in a controlled fashion, but care should be taken to recognize that it's not intended as the primary control handler for systems.

Tuning

The math

PID

Requires:
Triggers

Hardware:

  • Switches
  • Encoder
  • LaserCan

Success Criteria

  • Create a Trigger that represents a sensor condition
  • Create a Joystick command that runs indefinitely, but stops when the Trigger is in a true condition.
  • Repeat with a different sensor type
  • Create a Trigger that performs a Command automatically when triggered

Summary

Sensing is interacting with physical objects, and changing robot behaviour based on them.
This can use a variety of sensors and methods, and will change from system to system.

Sensor Information transfer

Often simple sensors like break beams or switches can tell you something very useful about the system state, which can help you set up a different, more precise sensor.

The most common application is in Homing, such as in Elevator-type systems. On boot, your Encoder may not properly reflect the system state, and thus the elevator position is invalid. But, if you have a switch at the end of travel, you can use it to re-initialize your encoder, since the simple switch tells you exactly where the system is.

Sensing Basics

Success Criteria

  • Configure a motor encoder
  • Read an encoder
  • Configure encoder range/units through gearing
  • Enable/Disable Soft Limits

TODO

  • Absolute vs Relative encoders

  • Startup positioning

  • Homing

  • Slew Rate Limiting

Synopsis

An Encoder is a sensor that counts rotations.

Incremental_encoder.gif

Incremental_directional_encoder.gif

Encoder Basics

Success Criteria

  • Start a command when the robot is enabled, and have it end automatically

Learning objectives

  • Support command+trigger subsystem interfaces
  • Model system state into binary regions
  • loose coupling of subsystems
  • Tolerances on sensors
  • Joystick buttons = trigger ; Hidden common use case
  • Starting commands with triggers
  • ending commands with triggers
  • sequencing component
Triggers

Success Criteria

TODO
  • Advantages
  • Disadvantages
  • Discontinuity handling
  • Integration with relative encoders
    Homing Sequences

Chinese Remainder Theorem

This is a numerical trick that can allow use of absolute encoders
Elevator
https://en.wikipedia.org/wiki/Chinese_remainder_theorem
Use two different scales
Compare

Absolute Encoders

Requires:
FeedForwards
PID
Reading Resources:
Homing

Success Criteria

  • Create an Elevator subsystem
  • Set Encoders
    • find the encoder range and conversion to have real-world values
    • Find the system range, apply soft limits
  • Get control
    • Determine the system Gravity feed-forward value
    • Create a PID controller
    • Tune the PID to an acceptable level for control
  • Create a default command that holds the system at the current height
  • Create a setHeight function that takes a height, and returns a command that runs indefinitely to the target height
  • Create a Trigger that indicates if the system is within a suitable tolerance of the commanded height.
  • Bind several target positions to a controller
  • Create a small auto sequence that moves to multiple positions in sequence.
SuperStructure Elevator

Recommends:
State Machines

Success Criteria

  • Create an "over the bumper" intake system
  • Add a controller button to engage the intake process. It must retract when released
  • The button must automatically stop and retract the intake when a game piece is collected

Synopsis

Intake complexity can range from very simple rollers that capture a game piece, to complex actuated systems intertwined with other scoring mechanisms.

A common "over the bumper" intake archetype is a deployed system that

  • Actuates outward past the frame perimeter
  • Engages rollers to intake game piece
  • Retracts with the game piece upon successful acquisition

The speed of deployment and retraction both impact cycle times, forming a critical competitive aspect of the bot.

The automatic detection and retraction provide cycle advantages (streamlining the driver experience), but also prevent fouls and damage due to collisions with the deployed mechanism.

SuperStructure Intake

Requires:
Sensing Basics

Success Criteria

  • ???

  • Covering system "state" is very useful, especially in subsystems

  • ConditionalCommand + SelectCommand can be useful for attributing actions and states on simple systems

  • Need to find a sensible formal way to cover it; It's easy to make "custom" state machines for simple systems, but hard to scale up in complexity with consistent patterns.

Possible state model

  • States of Unloaded, unaligned, loaded, scoring

Consideration: Explain state machines here, as an explanation of how they're used and what they represent

Actually make it a workshop later.

State Machines

Requires:
FeedForwards
PID

Success Criteria

  • Create an Arm subsystem
  • Set Encoders
    • find the encoder range and conversion to have real-world angle values
    • Find the system range, apply soft limits
  • Get control
    • Determine the system Gravity feed-forward value
    • Create a PID controller
    • Tune the PID to an acceptable level for control
  • Create a default command that holds the system at the current angle
  • Create a setAngle function that takes an angle, and returns a command that runs indefinitely to the target angle
  • Create a Trigger that indicates if the system is within a suitable tolerance of the commanded angle.
  • Bind several target positions to a controller
  • Create a small auto sequence that moves to multiple positions in sequence.
SuperStructure Arm

Success Criteria

  • Configure a Limelight
  • Identify an AprilTag
  • Create a trigger that returns true if a target is in view
  • When a target is in view, print the offset between forward and the target
  • Estimate the distance to the target
  • Configure the LL to identify a game piece of your choice.
  • Indicate angle between forward and game piece.
Limelight Basics

Success Criteria

  • Connect to the LaserCan using GrappleHook
  • Create a new Lasercan sensor subsystem
  • Create a Trigger that returns true while an object is within X" of the sensor
LaserCan

Success Criteria

  • Configure a NavX or gyro on the robot

  • Find a way to zero the sensor when the robot is enabled in auto

  • Create a command that tells you when the robot is pointed the same way as when it started

  • Print the difference between the robot's starting angle and current angle

  • TODO

  • what's an mxp

  • what port/interface to use, usb

  • which axis are you reading

Gyro Sensing

Goals

Understand how to efficiently communicate to and from a robot for diagnostics and control

Success Criteria

Lesson

Glass
  • Graphs
  • Field2D
  • Poses
  • Folders
  • Mechanism2d
Elastic
  • Widget options
  • Driverstation setup
Basic Telemetry

Success Criteria

  • ???

Part of

NetworkTables

Homing is the process of recovering physical system positions on relative encoders.

Part of:

SuperStructure Arm
SuperStructure Elevator
And will generally be done after most requirements for those systems

Success Criteria

  • Home a subsystem using a Command-oriented method
  • Home a subsystem using a state-based method
  • Make a non-homed system refuse non-homing command operations
  • Document the "expected startup configuration" of your robot, and how the homing sequence resolves potential issues.

Lesson Plan

  • Configure encoders and other system configurations
  • Construct a Command that homes the system
  • Create a Trigger to represent if the system is homed or not
  • Determine the best way to integrate the homing operation. This can be
    • Initial one-off sequence on enable
    • As a blocking operation when attempting to command the system
    • As a default command with a Conditional Command
    • Idle re-homing (eg, correcting for slipped belts when system is not in use)

Success Criteria

  • Home an elevator system using system current
  • home an arm system using system current
  • Home a system

What is Homing?

When a system is booted using Relative Encoders, the encoder boots with a value of 0, like you'd expect. However, the real physical system can be anywhere in its normal range of travel, and the bot has no way to know the difference.

Homing is the process of reconciling this difference, thus allowing your code to assert a known physical position, regardless of what position the system was in when it booted.

To Home or not to home

Homing is not a hard requirement of Elevator or Arm systems. As long as you boot your systems in known, consistent states, you can operate without issue.

However, homing is generally recommended, as it provides benefits and safeguards:

  • You don't need strict power-on procedures. This is helpful at practices when the bot will be power cycled and get new uploaded code regularly.
  • Power loss protection: If the bot loses power during a match, you just lose time when re-homing; You don't lose full control of the bot, or worse, cause serious damage.
  • Improved precision: Homing the system via code ensures that the system is always set to the same starting position.

Homing Methods

Hard stops

When looking at homing, the concept of a "Hard Stop" will come up a lot. A hard stop is simply a physical constraint at the end of a system's travel, that you can reliably anticipate the robot hitting without causing system damage.
In some bot designs, hard stops are free. In other designs, hard stops require some specific engineering design.

Safety first!

Any un-homed system has the potential to perform in unexpected ways, potentially causing damage to itself or its surroundings.
We'll gloss over this for now, but make sure to set safe motor current constraints by default, and only enable full power when homing is complete.

No homing + strict booting process.

With this method, consistency comes from physically resetting the robot before powering it on. Humans must physically set all non-homed mechanisms to their expected starting positions, then power on the robot.

From here, you can do anything you would normally do, and the robot knows where it is.

This method is often "good enough", especially for testing or initial bringup. For some robots, gravity makes it difficult to boot the robot outside of the expected condition.

Watch your resets!

With this method, make sure your code does not reset encoder positions when initializing.
If you do, code resets or power loss will cause a de-sync between the booted position and the operational one. You have to trust the motor controller + encoder to retain positional accuracy.

Current Detection

Current detection is a very common, and reliable method within FRC. With this method, you drive the system toward a hard stop, and monitor the system current.

When the system hits the hard stop, the load on your system increases, requiring more power. This can be detected by polling for the motor current. When your system exceeds a specific current for a long enough time, you can assert that your system is homed!
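A minimal sketch of that routine as a command factory, assuming a REV SparkMax (for getOutputCurrent() and the relative encoder) and placeholder values for the homing output, current threshold, and debounce time:

// Inside the subsystem; uses edu.wpi.first.math.filter.Debouncer
public Command homeByCurrent(){
	// Require the current to stay above 20A for 0.25s before trusting it
	Debouncer stalled = new Debouncer(0.25, Debouncer.DebounceType.kRising);
	return run(()-> motor.set(-0.15)) // drive gently toward the hard stop
		.until(()-> stalled.calculate(motor.getOutputCurrent() > 20))
		.finallyDo(cancelled -> {
			motor.set(0);
			// Only trust the position if we actually reached the hard stop
			if(!cancelled) motor.getEncoder().setPosition(0);
		});
}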

Velocity Detection

Speed Detection works by watching the encoder's velocity. You expect that when you hit the hard stop, the velocity should be zero, and go from there. However, there's some surprises that make this more challenging than current detection.

Velocity measurements can be very noisy, so using a filter is generally required.

This method also suffers from the simple fact that the system velocity will be zero when homing starts. And zero is also the speed you're looking for as an end condition. You also cannot guarantee that the system speed ever increases above zero, as it can start against the hard stop.
As such, you can't do a simple check, but need to monitor the speed for long enough to assert that the system should have moved if it was able to.

Limit Switches

Limit switches are a tried and true method in many systems. You simply place a physical switch at the end of travel; When the bot hits the end of travel, you know where it is.

Mechanical Robustness Required

Limit switches require notable care on the design and wiring to ensure that the system reliably contacts the switch in the manner needed.

The apparent simplicity of a limit switch hides several design and mounting considerations. In an FRC environment, some of these are surprisingly tricky.

  • A limit switch must not act as an end stop. Simply put, they're not robust enough and will fail, leaving your system in an uncontrolled, downward-driving state.
  • A limit switch must be triggered at the end of travel; Otherwise, it's possible to start below the switch.
  • A switch must have a consistent "throw" ; It should trip at the same location every time. Certain triggering mechanisms and arms can cause problems.
  • If the hard stop moves or is adjusted, the switch will be exposed to damage, and/or cause other issues

Because of these challenges, limit switches in FRC tend to be used in niche applications, where use of hard stops is restricted. One such case is screw-driven Linear Actuators, which generate enormous amounts of force at very low currents, but are very slow and easy to mount things to.

Switches also come in multiple types, which can impact the ease of design. In many cases, a magnetic hall effect sensor is optimal, as it's non-contact, and easy to mount alongside a hard stop to prevent overshoot.

Most 3D printers use limit switches, allowing for very good demonstrations of the routines needed.

For designs where hard stops are not possible, consider a Roller Arm Limit Switch and run it against a CAM. This configuration allows the switch to be mounted out of the line of motion, but with an extended throw.

limit-switch-cam.svg

Index Switches

Index switches work similarly to Limit Switches, but the expectation is that they're in the middle of the travel, rather than at the end of travel. This makes them unsuitable as a solo homing method, but useful as an auxiliary one.

Index switches are best used in situations where other homing routines would simply take too long, but you have sufficient knowledge to know that it should hit the switch in most cases.
This can often come up in Elevator systems where the robot starting configuration puts the carriage far away from the nearest limit.

In this configuration, use of a non-contact switch is generally preferred, although a roller-arm switch and a cam can work well.

Absolute Position Sensors

In some cases we can use absolute sensors such as Absolute Encoders or Range Finders to directly detect information about the robot state, and feed that information into our encoders.

This method works very effectively on Arm based systems; Absolute Encoders on an output shaft provide a 1:1 system state for almost all mechanical designs.

Elevator systems can also use these routines with Range Finders, detecting the distance between the carriage and the end of travel.

Clever designers can also use Absolute Encoders for elevators in a few ways

  • You can simply assert a position within a narrow range of travel
  • You can gear the encoder to have a lower resolution across the full range of travel
  • You can use multiple encoders to combine the above global + local states
  • use of the Chinese Remainder Theorem to get position from two differently geared encoders

Time based homing

A relatively simple routine: just run your system at a known minimum power for a set length of time, ensuring the system ends up in a known position. After that time, you can reset the encoder.

This method is very situational. It should only be used in situations where you have a solid understanding of the system mechanics, and know that the system will not encounter damage when ran for a set length of time.

Backlash-compensated homing

In some cases you might be able to find the system home state (using gravity or another method), but find backlash is preventing you from hitting desired consistency and reliability.

This is most likely to be needed on Arm systems, particularly actuated Shooter systems. This is akin to a "calibration" as much as it is homing.

In these cases, homing routines will tend to find the absolute position by driving downward toward a hard stop. In doing so, this applies drive train tension toward the down direction. However, during normal operation, the drive train tension will be upward, against gravity.

This gives a small, but potentially significant difference between the "zero" detected by the sensor, and the "zero" you actually want. Notably, this value is not a consistent value, and wear over the life of the robot can impact it.

Similarly, in "no-homing" scenarios where you have gravity assertion, the backlash tension is effectively randomized.

To resolve this, backlash compensation then needs to run to apply tension "upward" before fully asserting a defined system state. This is a scenario where a time-based operation is suitable, as it's a fast operation from a known state. The power applied should be small: ideally the largest value that won't cause actual motion away from your hard stop (meaning, at or below kS+kG ).

For an implementation of this, see CalibrateShooter from Crescendo.

Online position recovery

Nominally, homing a robot is done once at first run, and from there you know the position. However, sometimes the robot has known mechanical faults that cause routine loss of positioning from the encoder's perspective. Fortunately, other sensors may be able to provide insight and help correct the error.
This kind of error most typically shows up as belt or chain skipping.

To overcome these issues, you can run condition checks alongside your normal runtime code, trying to identify signs that the system is in an incorrect state, and correcting the sensor information when they appear.

This is best demonstrated with examples (a code sketch follows the list):

  • If you home an elevator to the bottom of its travel at position 0, you should never see negative encoder values. As such, seeing a "negative" encoder value tells you that the mechanism has slipped and is actually at the end of travel.
  • If you have a switch at the limit of travel, you can just re-assert zero every time you hit it. If there's a belt slip, you still end up at zero.
  • If an arm should rest in an "up" position, but slip tends to push it down, retraction failures might have no good detection mode. In that case, simply apply a re-homing technique whenever the arm is in its idle state.
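
A minimal sketch of the first example, assuming a hypothetical elevator homed to 0 at its bottom hard stop and read through a WPILib quadrature Encoder (channels and threshold are assumptions):

import edu.wpi.first.wpilibj.Encoder;
import edu.wpi.first.wpilibj2.command.SubsystemBase;

class ElevatorSubsystem extends SubsystemBase {
	// Hypothetical quadrature encoder on DIO 0/1; adapt to your hardware.
	private final Encoder encoder = new Encoder(0, 1);

	@Override
	public void periodic() {
		// Homed to 0 at the bottom hard stop, so a meaningfully negative reading
		// means the belt has slipped and the carriage is physically at the bottom:
		// re-assert zero.
		if (encoder.getDistance() < -0.5) {
			encoder.reset();
		}
	}
}
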
Band-Aid Fix

Online Position Recovery is a useful technique in a pinch. But, as with all other hardware faults, it's best to fix it in hardware. Use only when needed.

If the system is running nominally, these techniques don't provide much value, and can cause other runtime surprises and complexity, so they're discouraged.
In cases where such loss of control is hypothetical or infrequent, simply giving drivers a homing button tends to be a better approach.

Modelling Un-homed systems in code

When doing homing, you typically have 4 system states, each with its own behavior. Referring to it as a State Machine generally keeps things simple:

Unhomed

Homing

Homed

NormalOperation

Unhomed

The Unhomed state should be the default bootup state. This state should prepare your system with:

  • A boolean flag or state variable your system can utilize
  • Safe operational current limits; Typically this means a low output current or speed control.

It's often a good plan to have some way to manually trigger a system to go into the Unhomed state and begin homing again. This allows your robot drivers to recover from unexpected conditions when they come up. There's a number of ways your robot can lose position during operation, most of which have nothing to do with software.

Homing

The Homing state should simply run the desired homing strategy.

Modeling this sequence tends to be the tricky part, and a careless approach will typically reveal a few issues

  • Modelling the system with driving logic in the subsystem and Periodic routine typically clashes with the general flow of the Command structure.
  • Modelling the Homing as a command can result in drivers cancelling the command, leaving the system in an unknown state
  • Continuously re-applying homing after each cancellation can frustrate drivers, as the system never gets to a known state.
  • Trying to make many other commands check homing conditions can result in bugs by omission.

The obvious takeaway is that however you home, you want it to be fast and ideally run in the Auto sequence. Working with your designers can streamline this process.

Use of the Command decorator withInterruptBehavior(...) allows an easy escape hatch. This flag inverts how Commands are scheduled; instead of new commands cancelling running ones, it allows your homing command to forcibly block others from getting scheduled.

If your system is already operating on an internal state machine, homing can simply be a state within that state machine.
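
A minimal sketch of that approach (collapsing Homed and NormalOperation into one state; the subsystem name, power, and sensor/motor helpers are hypothetical):

import edu.wpi.first.wpilibj2.command.SubsystemBase;

class WristSubsystem extends SubsystemBase {
	private enum HomingState { UNHOMED, HOMING, HOMED }
	private HomingState state = HomingState.UNHOMED;

	@Override
	public void periodic() {
		switch (state) {
			case UNHOMED:
				// Wait until homing is requested (e.g. on enable or a driver button).
				break;
			case HOMING:
				setPower(-0.1);     // creep toward the hard stop
				if (atHardStop()) { // e.g. switch hit or current spike
					resetEncoder();
					setPower(0.0);
					state = HomingState.HOMED;
				}
				break;
			case HOMED:
				// Normal closed-loop operation runs here or via commands.
				break;
		}
	}

	public void startHoming() { state = HomingState.HOMING; }
	public boolean isHomed() { return state == HomingState.HOMED; }

	private void setPower(double power) { /* drive the motor */ }
	private boolean atHardStop() { return false; /* sensor or current check */ }
	private void resetEncoder() { /* zero the position sensor */ }
}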

Homed

This state is easy: Your system can now assert the known position, set your homed flag, apply updated power/speed constraints, and resume normal operation.

Example Implementations

Command Based

Conveniently, the whole homing process actually fits very neatly into the Commands model, making for a very simple implementation

  • init() represents the unhomed state and reset
  • execute() represents the homing state
  • isFinished() checks the system state and indicates completion
  • end(interrupted) can handle the homed procedure
class ExampleSubsystem extends SubsystemBase {
	SparkMax motor = ...; // motor setup omitted
	private boolean homed = false;

	ExampleSubsystem() {
		// Reduced limit for safe homing; the exact current-limit call
		// and value will vary by vendor library and system
		motor.setMaxOutputCurrent(4);
	}

	public Command goHome() {
		return new FunctionalCommand(
			// init: mark the system as unhomed
			() -> { homed = false; },
			// execute: drive slowly toward the hard stop
			() -> motor.set(-0.5),
			// end: if we finished (not interrupted), assert home and restore limits
			(interrupted) -> {
				if (!interrupted) {
					homed = true;
					motor.setMaxOutputCurrent(30);
				}
			},
			// isFinished: a current spike means we've hit the hard stop
			() -> motor.getOutputCurrent() > 4,
			this)
		// Failsafe in case something goes wrong, since otherwise you
		// can't exit this command by button mashing
		.withTimeout(5)
		// Prevent other commands from stopping this one
		.withInterruptBehavior(InterruptionBehavior.kCancelIncoming);
	}
}

This command can then be inserted at the start of autonomous, ensuring that your bot is always homed during a match. It also can be easily mapped to a button, allowing for mid-match recovery.

For situations where you won't be running an auto (typical testing and practice field scenarios), the use of Triggers can facilitate automatic checking and scheduling

class ExampleSubsystem extends SubsystemBase {
	ExampleSubsystem() {
		// Schedule homing automatically whenever the robot is enabled while unhomed
		new Trigger(DriverStation::isEnabled)
			.and(() -> !homed)
			.onTrue(goHome());
	}
}

Alternatively, if you don't want to use the withInterruptBehavior(...) option, you can hijack other command calls with Commands.either(...) or new ConditionalCommand(...)

class ExampleSubsystem extends SubsystemBase {
/* ... */
	// Intercept commands directly to prevent unhomed operation:
	// run the normal action if homed, otherwise run the homing routine first.
	public Command goUp() {
		return Commands.either(
			Commands.run(() -> motor.set(0.5), this),
			goHome(),
			() -> homed);
	}
/* ... */
}
Homing Sequences

Success Criteria

  • Create a button that causes the robot to face a bearing of 0 degrees
  • Create 3 additional buttons to face 90, 180, and 270.
  • Ensure that the drivers can hold those buttons and use the throttle to drive in the indicated direction
  • Create a 4-step auto that traverses a predefined path using gyro headings and encoder distances
Gyro Driving

Success Criteria

  • Create a button that aims your chassis at a target
  • Ensure the above button allows drivers to move toward and away from target while it's held
  • Create a pipeline that allows you to drive to the left/right of the target instead of directly at it
  • Create a button that aligns you with a game piece, and allows drivers to drive at it
Limelight Assist Driving
See Official Documentation

The official radio documentation is complete and detailed, and should serve as your primary resource.
https://frc-radio.vivid-hosting.net/

However, it's not always obvious what you need to look up to get moving. Consider this document a simple guide and jumping-off point to the right documentation elsewhere.

Setting up the radio for competition

You don't! The Field Technicians at competitions will program the radio for you.

When configured for competition play, you cannot connect to the radio via Wi-Fi. Instead, use an Ethernet cable or a USB connection to the roboRIO.

Setting up the radio for home

The home radio configuration is a common pain point; there are a few options.

Option 1: Wired connection

This option is the simplest: Just connect the robot via an Ethernet or USB cable, and do whatever you need to do. For quick checks, this makes sense, but it's obviously suboptimal for things like driving around.

Option 2: 2.4GHz Wi-Fi Hotspot

The radio does have a 2.4GHz Wi-Fi hotspot, albeit with some limitations. This mode is suitable for many practices, and is generally the recommended approach for everyday practice due to ease of use.

Note, this option requires access to the tiny DIP switches on the back of the radio! You'll want to make sure that your hardware teams don't mount the radio in a way that makes this impossible to access.

Option 3: Tethered Bridge

This option uses a second radio to connect your laptop to the robot. This is the most cumbersome and limited way to connect to a robot, and makes swapping who's using the bot a bit more tricky.

However, this is also the most performant and reliable connection method. This is recommended when doing extended driving sessions, final performance tuning, and other scenarios where you're trying to simulate competition-ready environments.

With this option, the robot end stays set up as normal, and your driver-station setup will look like the following image. See https://frc-radio.vivid-hosting.net/overview/practicing-at-home for full setup directions.
vivid-radio-wifi-bridge.png

Bonus Features

Port Forwarding

Port forwarding allows you to bridge networks across different interfaces.

The practical application in FRC is being able to access network devices via the USB interface! This is mostly useful for quickly interfacing with Vision hardware like the Limelight or Photonvision at competitions.

//Add in the constructor in Robot.java or RobotContainer.java

// If you're using a Limelight
PortForwarder.add(5800, "limelight.local", 5800);
// If you're using PhotonVision
PortForwarder.add(5800, "photonvision.local", 5800);
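
Cameras often expose several ports (streams, config pages, and so on); the exact range depends on the device, so check your vendor's documentation. Assuming the commonly used 5800-5809 range, a small loop keeps this compact (placed in the same constructor as above):

// Forward the full camera port range over USB
// (the 5800-5809 range is an assumption; confirm against your camera's docs)
for (int port = 5800; port <= 5809; port++) {
	PortForwarder.add(port, "limelight.local", port);
}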

Scripting the radio

The radio has some scriptable interfaces, allowing programmatic access to quickly change or read settings.

Robot Radio

Goals

Understand the typical Git operations most helpful for day-to-day programming

Completion Requirements

This module is intended to be completed alongside other tasks.

  • Initialize a git repository in your project
  • Create an initial commit
  • Create several commits representing simple milestones in your project
  • When moving to a new skill card, create a new branch to represent it. Create as many commits on the new branch as necessary to track your work for this card.
  • When working on a skill card that does not rely on the previous branch, switch to your main branch, and create a new branch to represent that card.
  • On completion of that card (or card sequence), merge the results of both branches back into Main.
  • Upon resolving the merge, ensure both features work as intended.

Topic Summary

  • Understanding git
  • workspace, staging, remotes
  • fetching
  • Branches + commits
  • Pushing and pulling
  • Switching branches
  • Merging
  • Merge conflicts and resolution
  • Terminals vs integrated UI tools

In general, your branch history will end up looking something like this:

[Git graph: a main branch, with a featureName branch containing several commits built on top of it]

Git Fundamentals

Git is a "source control" tool intended to help you manage source code and other text data.

Git has many superpowers, but the basic level provides "version control": this lets you create "commits", which capture your code's state at a point in time. Once you have these commits, Git lets you go back in time, compare against what you've done, and more.

[Git graph: main — "new empty project" → "Added a subsystem" → "Added another subsystem" → "add commands" → "Ready to go to competition"]

Diffs

Fundamental to Git is the concept of a "difference", or a diff for short. Rather than just duplicating your entire project each time you want to make a commit snapshot, Git actually just keeps track of what you've changed.

In a simplified view, updating this simple subsystem

/**Example class that does a thing*/
class ExampleSubsystem extends SubsystemBase{
	private SparkMax motor = new SparkMax(1);
	ExampleSubsystem(){}
	public void runMotor(){
		motor.set(1);
	}
	public void stop(){/*bat country*/}
	public void go(){/*fish*/}
}

to this

/**Example class that does a thing*/
class ExampleSubsystem extends SubsystemBase{
	private SparkMax motor = new SparkMax(1);
	private Encoder encoder = new Encoder(0, 1);
	ExampleSubsystem(){}
	public void runMotor(double power){
		motor.set(power);
	}
	public void stop(){/*bat country*/}
	public void go(){/*fish*/}
}

would be stored in Git as

class ExampleSubsystem extends SubsystemBase{
	private SparkMax motor = new SparkMax(1);
+	private Encoder encoder = new Encoder(0, 1);
	ExampleSubsystem(){}
-	public void runMotor(){
-		motor.set(1);
+	public void runMotor(double power){
+		motor.set(power);
	}
	public void stop(){/*bat country*/}

With this difference, the changes we made are a bit more obvious. We can see precisely what we changed, and where we changed it.
We also see that some stuff is missing in our diff: the first comment is gone, and we don't see go or our closing brace. Those didn't change, so we don't need them in the commit.

However, there are some unchanged lines, near the changed lines. Git refers to these as "context". These help Git figure out what to do in some complex operations later. It's also helpful for us humans just taking a casual peek at things. As the name implies, it helps you figure out the context of that change.

We also see something interesting: when we "change" a line, Git actually

  • Marks the old line as deleted
  • Marks the new line as added

Simply put, removing the old line and then adding the new one is just easier most of the time. However, some tools detect this pattern, and will bold or highlight the specific parts of the line that changed.

Commits + Branches

Now that we have some changes in place, we want to "Commit" that change to Git, adding it to our project's history.

A commit in Git is just a bunch of changes, along with some extra data. The most relevant pieces are:

  • A commit "hash", which is a unique key representing that specific change set
  • The "parent" commit, which these changes are based on
  • The actual changes + files they belong to.
  • Date, time, and author information
  • A short human readable "description" of the commit.

These commits form a sequence, building up from the earliest state of the project. We generally assign a name to such a sequence, called a "branch".

A typical project starts on the "main" branch; after a few commits, you'll end up with a nice, simple history like this.

[Git graph: main — "new empty project" → "Added a subsystem" → "Added another subsystem" → "add commands" → "Ready to go to competition"]

It's worth noting that a branch really is just a name that points to a commit, and is mostly a helpful book-keeping feature. The commits and commit chain do all the heavy lifting. Basically anything you can do with a branch can be done with a commit's hash instead!

Multiple Branches + Switching

We're now starting to get into Git's superpowers. You're not limited to just one branch. You can create new branches, switch to them, and then commit, to create commit chains that look like this:

[Git graph: main — "new empty project" → "Added a subsystem" → "Added another subsystem" → "add commands" → "Ready to go to competition"; branch competition — "mess for qual 4" → "mess for qual 8"]

Here we can see that mess for qual 4 and mess for qual 8 are built off the main branch, but kept as part of the competition branch. This means our main branch is untouched. We can now switch back and forth using git switch main and git switch competition to access the different states of our codebase.

We can, in fact, even continue working on main, adding commits like normal.

[Git graph: as above, with main continuing on to "added optional sensor" while competition keeps "mess for qual 4" → "mess for qual 8"]

Being able to have multiple branches like this is a foundational part of how Git works, and a key detail of its collaborative model.

However, you might notice the problem: We currently can access the changes in competition or main, but not both at once.

Merging

Merging is what allows us to do that. It's helpful to think of a merge as pulling the changes from another branch into your current branch.

If we merge competition into main, we get this. Both changes ready to go! Now main can access the competition branch's changes.

[Git graph: as above, with the competition branch merged back into main via a "merge comp into main" commit]

However, we can equally merge main into competition, granting competition access to the changes in main.

[Git graph: as above, with main merged into the competition branch via a "merge main into comp" commit]

With merging as a tool, we have unlocked the true power of Git. Any set of changes builds on top of the others, and we can grab changes without interrupting our existing code or any other changes we've been making!

This feature powers git's collaborative nature: You can pull in changes made by other people just as easily as you can your own. They just have to have the same parent somewhere up the chain so git can figure out how to step through the sequence of changes.

Branch Convention

Workspace, Staging, Origin

Git is a distributed system, and as such has a few different places that all these changes can live.

The most apparent one is your actual code on your laptop, forming the workspace. As far as you're concerned, this is just the files in the directory. However, Git sees them as the culmination of all changes committed in the current branch, plus any uncommitted changes.

The next one is "staging": This is just the incomplete next commit, and holds all the changes you've added as part of it. Once you properly commit these changes, your staging will be cleared, and you'll have a new commit in your tree.

It basically looks like this:

[Git graph: main — "new empty project" → "Added a subsystem" → "Added another subsystem" → "add commands" → "Ready to go to competition", with the staging area and then the workspace sitting ahead of the latest commit]

Next is a "remote", representing a computer somewhere else. In most Git work, this is just GitHub. There are several commands focused on interacting with your remote; this facilitates collaborative work and offsite backup.

Handling Merge Conflicts
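
A conflict happens when both branches change the same lines: Git can't pick a winner for you, so it writes both versions into the file, separated by conflict markers. As a rough sketch (the branch name and content here are illustrative), a conflicted file looks something like this:

class ExampleSubsystem extends SubsystemBase{
	private SparkMax motor = new SparkMax(1);
<<<<<<< HEAD
	public void runMotor(double power){
		motor.set(power);
=======
	public void runMotor(){
		motor.set(0.5);
>>>>>>> competition
	}
	public void stop(){/*bat country*/}
}

To resolve it, edit the file so it contains exactly what you want to keep, delete the <<<<<<<, =======, and >>>>>>> marker lines, then stage and commit the result.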


The critical commands

git init: This creates a new Git repository for your current project. You want to run this in the base directory of your project.
git add: Stages your changes so they become part of the next commit.

Git from VSCode

Other Git tools

There are a lot of tools that interact with your Git repository, but it's worth being mindful about which ones you pick! Many tools do unexpected things behind the scenes.

Git Basics

Success Criteria

  • Examine a robot design
  • Generate a design plan indicating the breakdown

Recommended:

Synopsis

This guide runs through how to examine a robot design, analyze the mechanics and game piece path, and form a plan for a code structure to control the system.

General process flow

Track the Game Piece Flow

For an FRC bot, "move the game piece" is the fundamental design objective, and serves as a great way to step through the bot.

Quick breakdown of mechanism classes

Being able to identify basic mechanisms is key to being able to model a robot in code. This non-exhaustive list should help provide some vocabulary for the analysis on typical bots.

Rollers: The simplest mechanical system: a motor and a shaft that spins.
Flywheel: A specialized Roller system with extra weight, intended to maintain a speed when launching objects.
Indexer: A mechanism to precisely align, prepare, or track game pieces internally in the robot. Often a Roller, but can be very complex.
Shooter: A compound system, usually consisting of at least a Flywheel and an Indexer, and sometimes an Arm or other aiming structure.
Intake: A specialized compound system intended for getting new game pieces into the robot. Generally consists of a Roller, often with another positioning device.
Arm: A system that rotates around a pivot point. Usually positions another subsystem.
Elevator: A system that travels along a linear track of some sort. Typically up/down, hence the name.
Swerve Drive (or Drivetrain): Makes the robot go whee along the ground.

Case Study: 2024 Crescendo Bot

Crescendo Bot Code
Note: The actual code for this bot may differ from this breakdown; it is based on the initial design provided, not the final version after testing.

Game info

The game piece for this game is a 2" tall, 14" diameter orange donut called a "note", which will be referenced throughout this breakdown.
There are two note scoring objectives: Shooting into an open "speaker" slot about 8" high, or placing into an "amp", which is a 2" wide slot about 24" off the ground.
Lastly, climbing is an end game challenge, with an additional challenge of the "trap", which is effectively scoring an amp-style placement while performing a climb.

Mechanism Breakdown

For this, we'll start with the game piece (note) path, and just take note of what control opportunities we have along this path.

Intake: The note starts on the floor and hits the under-bumper intake. This is a winding set of linked rollers driven by a single motor. This system has rigid control of the note, ensuring a "touch it, own it" control path.

Indexer: The game piece is then handed off to an indexer (or "passthrough"). This system is two rollers + motors above and below the note path, and has a light hold on the note: just enough to move it, but not enough to fight other systems for physical control.

Flywheel: The next in line is a Flywheel system responsible for shooting. This consists of two motors (above and below the note's travel path), and the rollers that make physical contact. When shooting, this is the end of the game piece path. This has a firm grip to impart significant momentum quickly.

Dunkarm + DunkarmRollers: When amp scoring/placing notes, we instead hand off to the rollers in front of the shooter. These rollers are mounted on an arm, which can move the rollers out of the way of the shooter, or raise them upward for scoring.

Shooter: The Indexer and Flywheel are mounted on a pivoting arm, which we denote as the Shooter. This allows us to set the note launch angle.

Climber: The final mechanism is the climber. There's two climber arms, each with their own motor.

Sensing

The indexer has a single LaserCan rangefinder, located just before the shooter. This allows an analog view of the note position in the system.

Constraints + Conflicts

Again, let's follow the standard game piece flow path.

  • Intake+Indexer interaction: A note can be held by both Intake and Indexer simultaneously. In this case, the intake exerts control.
  • Intake + Shooter : If the shooter angle is too high, note transfer to the indexer will fail.
  • Indexer + Flywheel: A note can be held by both the indexer and flywheel simultaneously. Again, the flywheel wins, but fighting the indexer would impact accuracy/momentum.
  • Flywheel + Dunkarms: A note getting passed into the dunkarm rollers is held more strongly by the shooter than the dunkarm rollers. This can only be done at a fixed angle combination.
  • Climber + Dunkarms: When climbing without trap scoring, the chain will come down on the dunkarms. The climber will win, with catastrophic damage.
  • Climber + Shooter: When trap scoring, the dunk arms are out of the way, but the chains will come down on the shooter.
  • Dunkarm + Shooter: The rollers can potentially be in the way of the shot path. This can occur at low shot angles, or if the dunkarm is moving down after other scoring.

That seems to be about it for conflicts between the control systems.

We should also do a quick check of the hard stops; These serve as reference points and physical constraints.

  • The Dunkarms have a lower hard stop. There is no upper hard stop, but the arm eventually rotates too far and breaks wiring.
  • The Shooter has a bottom hard stop. It has an upper end of travel, but no physical stop.
  • The Climber has a bottom hard stop and an upper end of travel. In both cases, running into them with high torque will cause damage.
  • All other systems are rollers, with no hard or soft stops.

Critical code tasks

Before getting into how the code is structured, let's decide what the code should be doing during normal gameplay cycles

  • Intake note: This will be running the intake, feeding the note into the indexer. Since running too far will feed it into the shooter, we can use our indexer sensor to end the process.
  • Shot preparation: Before shooting a note, we need to make sure the note is not pushed into the flywheels. Once that's done, we need to get to the target angle and speed.
  • Shooting: The indexer needs to feed the note into the flywheel.
  • Score amp: This takes a note held in the dunkarm rollers, rotates the arm up, and then runs the rollers to score it.
  • Load dunkarm rollers: This requires the dunkarm + shooter to be in the desired lineup, then the indexer feeds the note into the shooter, which in turn feeds it into the dunk arm rollers. The rollers must be capable of stopping/managing the note to prevent it from falling out during this process.
  • Climbing, no trap: Climber goes up, climber comes down, and doesn't crush the dunkarms.
  • Scoring Trap: This requires maneuvering around the chain to get the dunkarms in position. From there, it's just climbing, then amp scoring.

Code Breakdown

We can now start looking at how to structure the code to make this robot happen. Having a good understanding of Command flow helps here.

We'll start with the subsystem breakdown. Based on the prior work, we know there's lots of loose coupling: Several subsystems are needed for multiple different actions, but nothing is strongly linked. The easy ones are:

  • Intake (1 motor)
  • Indexer (2 motors, the top and bottom)
  • Shooter (the pivot motor)
  • Flywheels (the two motors, top and bottom)
  • Climber (two motors, left and right)
    This allows commands to pair/link actions, or allow independent responses.

The Dunkarm + Dunkarm Rollers split is less clear. From an automation perspective, we could probably combine these. But the humans will want separate buttons for "put arm in position" and "score the note". To avoid command sequence conflicts, we'd want these separate.

  • Dunkarm (1 motor, the pivot)
  • Dunkarm Rollers (1 motor for the roller pair)

Next, we define what the external Command API for each subsystem should look like so we can manipulate them (a code sketch of one such subsystem follows the list).

  • Intake:

    • Intake command to pull in a note.
    • Eject: In case we have to get rid of a note
  • Flywheel:

    • Shoot note. Would need the appropriate RPM, which may be constant, or vary based on vision/sensor data.
    • Pass to dunk arm. This is likely just running at a target RPM, but may use other logic/control routines.
    • Retract note: In case something goes wrong, perhaps we want to pull the note in and clear the shooter for a reset.
    • isAtTargetRPM check, which is critical for sequencing
  • Shooter Pivot

    • SetAngle
    • isAtTargetPosition for sequencing
  • Dunkarm:

    • Set Angle. Would just need an angle reference.
    • isAtTargetAngle check for sequencing
    • Manual positioning: The human may control this directly for trap score line up
  • Dunkarm Rollers:

    • Load, which is likely just a speed/power suitable for controlled intaking from the robot
    • Score Amp, another speed/power appropriate for this task
    • Score Trap, Another speed/power for the task
    • drop/eject, just get rid of a note if something went wrong.
  • Climber:

    • set height: Basically the only job of this system is go up/go down
    • Is At Target Height check. Maybe useful for sequencing
  • Indexer

    • hasNote check; This is required for "end of intake"
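
As a rough sketch of what one of these subsystem APIs might look like in code (using the Flywheel; the RPM values and tolerance are assumptions, and motor/sensor details are omitted):

import edu.wpi.first.wpilibj2.command.Command;
import edu.wpi.first.wpilibj2.command.SubsystemBase;

class Flywheel extends SubsystemBase {
	private double targetRPM = 0;

	/** Spin up for a shot; the RPM may be a constant or come from vision/sensor data. */
	public Command shoot(double rpm) {
		return run(() -> setTargetRPM(rpm));
	}

	/** Slow reverse to pull a note back out of the flywheels (value is an assumption). */
	public Command retractNote() {
		return run(() -> setTargetRPM(-500));
	}

	/** Used by command sequences to know when it's safe to feed a note. */
	public boolean isAtTargetRPM() {
		return Math.abs(getRPM() - targetRPM) < 50; // tolerance is an assumption
	}

	private void setTargetRPM(double rpm) { targetRPM = rpm; /* closed-loop control here */ }
	private double getRPM() { return 0.0; /* read from the motor's encoder */ }
}
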
Robot Design Analysis