SuperStructure Focus
Swerve Focus
Vision Focus
Simulation, Logging, Visualization
Specialized Dependencies

Recommends:
State Machines

Success Criteria

  • Create an "over the bumper" intake system
  • Add a controller button to engage the intake process. It must retract when released
  • The intake must automatically stop and retract when a game piece is acquired

Synopsis

Intake complexity can range from very simple rollers that capture a game piece, to complex actuated systems intertwined with other scoring mechanisms.

A common "over the bumper" intake archetype is a deployed system that

  • Actuates outward past the frame perimeter
  • Engages rollers to intake game piece
  • Retracts with the game piece upon successful acquisition

The speed of deployment and retraction both impact cycle times, forming a critical competitive aspect of the bot.

The automatic detection and retraction provide cycle advantages (streamlining the driver experience), but also prevent fouls and damage from collisions with the deployed mechanism.

SuperStructure Intake

Requires:
Motor Control

Recommends:
FeedForwards
PID

Success Criteria

  • Create a Roller subsystem
  • Calibrate the system to use RPM
  • Create Commands for running forward and backwards using the target RPM
  • Bind them to buttons to move the game piece forward or backward
  • Create a default command that stops the subsystem
SuperStructure Rollers

Goals:

  • Understand how swerve works
  • Teleop Interactions for existing swerve
  • Reading odometry
  • Reset/Initialize odometry

Success Criteria

  • Use an existing Swerve configuration
Swerve Basics

Success Criteria

  • Create a PID system on a test bench
  • Tune necessary PIDs using encoders
  • Set a velocity using a PID
  • Set an angular position using a PID
  • Set an elevator position using a PID
  • Plot the system's position, target, and error as you command it.

TODO

Synopsis

A PID system is a Closed Loop Controller designed to reduce system error through a simple, efficient mathematical approach.

You may also appreciate Chapters 1 and 2 from controls-engineering-in-frc.pdf, which cover PIDs very well.

Deriving a PID Controller from scratch

To get an intuitive understanding of PIDs and feedback loops, it can help to start from scratch, recreating one from basic assumptions and simple code.

Let's start from the core concept of "I want this system to go to a position and stay there".

Initially, you might simply say "OK, if we're below the target position, go up. If we're above the target position, go down." This is a great starting point, with the following pseudo-code.

setpoint = 15 // your target position, in arbitrary units
sensor = 0    // initial position
if(sensor < setpoint){ output = 1 }
else if(sensor > setpoint){ output = -1 }
motor.set(output)

However, you might see a problem. What happens when setpoint and sensor are equal?

If you responded with "It rapidly switches between full forward and full reverse", you would be correct. If you also thought "This sounds like it might damage things", then you'll understand why this controller is called a "bang-bang" controller, named for the noises it tends to make.

Your instinct here might be to simply not go full power. That doesn't solve the problem, but it reduces its negative impacts. It also creates a new problem: now the system oscillates at the setpoint (but less loudly), and it takes longer to get there.

So, let's complicate this a bit. Let's take our previous bang-bang, but split the response into two different regions: far away, and closer. This is easier if we introduce a new term: error. Error is simply the difference between our setpoint and our sensor, simplifying the code and procedure. "Error" is a useful term that we'll use a lot.

run(()->{
	setpoint = 15 // your target position, in arbitrary units
	sensor = 0    // read your sensor here
	error = setpoint - sensor
	if     (error >  5){ output =  1   }
	else if(error >  0){ output =  0.2 }
	else if(error < -5){ output = -1   }
	else if(error <  0){ output = -0.2 }
	motor.set(output)
})

We've now slightly improved things: we can expect more reasonable responses when we're close, and fast responses far away. But we still have the same problem: those harsh transitions across each else if. Splitting into more and more branches doesn't seem like it'll help. To fully resolve the problem, we'd need an infinite number of tiers, dependent on how far we are from our target.

With a bit of math, we can do that! Our error term tells us how far we are, and the sign tells us what direction we need to go... so let's just scale that by some value. Since this is a constant value, and the resulting output is proportional to this term, let's call it kp: Our proportional constant.

run(()->{
	setpoint = 15 // your target position, in arbitrary units
	sensor = 0    // read your sensor here
	kp = 0.1
	error = setpoint - sensor
	output = error * kp
	motor.set(output)
})

Now we have a better behaved algorithm! At a distance of 10, our output is 1. At 5, it's half. When on target, it's zero! It scales just how we want.

Try this on a real system, and adjust the kP until your motor reliably gets to your setpoint, where error is approximately zero.

In doing so, you might notice that you can still oscillate around your setpoint if your gains are too high. You'll also notice that as you get closer, your output drops to zero. This means, at some point you stop being able to get closer to your target.

This is easily seen on an elevator system. You know that gravity pulls the elevator down, requiring the motor to push it back up. For the sake of example, let's say an output of 0.2 holds it up. Using our previous kP of 0.1, a distance of 2 generates that output of 0.2. If the distance is 1, we only generate 0.1... which is not enough to hold it! Our system is actually only stable below where we want it. What gives?
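You can see this standing error numerically with a toy simulation (plain Java; the plant model and constants are made up for illustration, not a real elevator model):

```java
public class ElevatorSim {
    // Toy model: each loop the elevator moves in proportion to
    // (motor output minus the 0.2 output needed to fight gravity).
    public static double settle(double kp, double setpoint) {
        double position = 0;
        for (int i = 0; i < 10000; i++) {
            double error = setpoint - position;
            double output = kp * error;       // P-only controller
            position += (output - 0.2) * 0.5; // gravity constantly pulls down
        }
        return position;
    }

    public static void main(String[] args) {
        System.out.println(settle(0.1, 15)); // ≈ 13: settles 2 units below the setpoint
    }
}
```

The equilibrium lands exactly where kp * error balances the gravity term: error = 0.2 / 0.1 = 2 units short of the target, no matter how long you wait.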

This general case is referred to as "steady-state error": every loop through our PID fails to reduce the error to zero, and the error eventually settles on a constant value. So... what if... we just add that error up over time? We can then incorporate that sum into our outputs. Let's do it.

setpoint = 15 // your target position, in arbitrary units
errorsum = 0
kp = 0.1
ki = 0.001
run(()->{
	sensor = 0 // read your sensor here
	error = setpoint - sensor
	errorsum += error
	output = error*kp + errorsum*ki
	motor.set(output)
})

The mathematical operation involved here is called integration, which is where this term gets its name. That's the "I" in PID.
In many practical FRC applications, this is as far as you need to go! P and PI controllers can do a lot of work, to suitable precision. This is a very flexible, powerful controller, and can get "pretty good" control over a lot of mechanisms.

This is probably a good time to read the WPILib PID Controller page, which covers several useful features. Using this built-in PID, we can reduce our previous code to a nice formalized version that looks something like this.

PIDController pid = new PIDController(kP, kI, kD);
run(()->{
	sensor = motor.getEncoder().getPosition();
	motor.set(pid.calculate(sensor, setpoint));
})

A critical detail in good PID controllers is the iZone. We can easily visualize the problem it solves by asking "What happens if a game piece gets stuck in our system?"
Well, we cannot get to our setpoint. So our errorSum gets larger and larger... until our system is running full power into the obstacle. That's not great. Most of the time, something will break in this scenario.

So, the iZone allows you to constrain the amount of error the controller actually stores. It might be hard to visualize the specific numbers, but you can just work backward from the math. If the I term's output is errorSum*kI, and we cap the stored error at iZone, then maxDesiredITermOutput = iZone*kI. So iZone = maxDesiredITermOutput/kI.
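Following the accumulator-cap description above, here's a minimal sketch of that math (plain Java; the kI value, the 0.3 output cap, and the constant jam error are all hypothetical numbers):

```java
public class IntegratorClamp {
    // Clamp the stored error sum so the I term's output can never exceed
    // the cap we chose by working backward from the math.
    public static double accumulate(double errorSum, double error, double iZone) {
        errorSum += error;
        return Math.max(-iZone, Math.min(iZone, errorSum));
    }

    public static void main(String[] args) {
        double kI = 0.001;
        double maxDesiredITermOutput = 0.3;
        double iZone = maxDesiredITermOutput / kI; // = 300
        double sum = 0;
        // Simulate a jam: the error never shrinks, loop after loop.
        for (int i = 0; i < 1000; i++) {
            sum = accumulate(sum, 5, iZone);
        }
        System.out.println(sum * kI); // I output pinned near 0.3 instead of growing forever
    }
}
```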

Lastly, what's the D in PID?

Well, it's less intuitive, but let's try. Have you seen the large spike in output when you change a setpoint? Give the output a plot, if you so desire. For now, let's just reason through a system using the previous example PI values, and a large setpoint change resulting in an error of 20.

Your PI controller is now outputting a value of 2.0; that's double full power! Your system will go full speed immediately with a sharp jolt, carry a ton of momentum at the halfway point, and probably overshoot the final target. So, what we want to do is constrain the speed: we want it fast, but not too fast. That means reducing the output according to how fast we're going.
Since we're focusing on error as our main term, let's look at the rate the error changes. When the error is changing fast, we want to reduce the output. The difference is simply error - previousError, so a similar strategy with gains gives us output += kD*(error - previousError).
This indeed gives us what we want: when the error is dropping quickly, this contribution is large and negative, acting to reduce the total output and slow the corrective action.

However, this term has another secret power: disturbance rejection. Let's assume we're at a steady position, the system is settled, and error = 0. Now, let's bonk the system downward, giving us a positive error. Suddenly error - previousError is positive, and the D term generates an upward force. For this interaction, all components of the PID work in tandem to get things back in place.
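Putting all three terms together, a from-scratch PID loop might look like this (plain Java sketch; a real loop would run at a fixed period, and the gains in the usage below are arbitrary example values):

```java
public class SimplePid {
    private final double kp, ki, kd;
    private double errorSum = 0, previousError = 0;

    public SimplePid(double kp, double ki, double kd) {
        this.kp = kp; this.ki = ki; this.kd = kd;
    }

    // One pass of the loop: all three terms from the sections above.
    public double calculate(double setpoint, double sensor) {
        double error = setpoint - sensor;
        errorSum += error;                      // the I term's running total
        double dError = error - previousError;  // rate of change of the error
        previousError = error;
        return error * kp + errorSum * ki + dError * kd;
    }

    public static void main(String[] args) {
        SimplePid pid = new SimplePid(0.1, 0, 0.05);
        System.out.println(pid.calculate(15, 0));  // 2.25: big jump, D adds to the kick
        System.out.println(pid.calculate(15, 10)); // 0.0: error fell fast, D cancels P
    }
}
```

Note how the second call's D contribution cancels the P term entirely once the error starts falling quickly — exactly the "slow down the corrective action" behavior described above.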

Limitations of PIDs

OK, that's enough nice things. Understanding PIDs requires knowing when they work well, and when they don't, and when they actually cause problems.

  • PIDs are reactive, not predictive. Note our key term is "error" ; PIDs only act when the system is already not where you want it, and must be far enough away that the generated math can create corrective action.
  • Large setpoint changes break the math. When you change a setpoint, the P output gets really big, really fast, resulting in an output spike. When the PID is acting to correct it, the errorSum for the I term is building up, and cannot decrease until it's on the other side of the setpoint. This almost always results in overshoot, and is a pain to resolve.
  • Oscillation: PIDs inherently generate oscillations unless tuned perfectly. Sometimes big, sometimes small.
  • D term instability: D terms are notoriously quirky. Large D terms and velocity spikes can result in bouncy, jostly motion towards setpoints, and can result in harsh, very rapid oscillations around the zero, particularly when systems have significant Mechanical Backlash.
  • PIDs vs hard stops: Most systems have one or more Hard Stops, which present a problem to the I term output. This requires some consideration of how your encoders are initialized, as well as your setpoints.
  • Tuning is either simple....or very time consuming.

So, how do you make the best use of PIDs?

  • Reduce the range of your setpoint changes. There are a few ways to go about it, but the easiest are clamping changes, Slew Rate Limiting and Motion Profiles. With such constraints, your error is always small, so you can tune more aggressively for that range.
  • Utilize FeedForwards to create the basic action; Feed-forwards create the "expected output" to your motions, reducing the resulting error significantly. This means your PID can be tuned to act sharply on disturbances and unplanned events, which is what they're designed for.

In other words, this is an error correction mechanism, and if you avoid adding error to begin with, you more effectively accomplish the motions you want. Throwing a PID at a system can get things moving in a controlled fashion, but care should be taken to recognize that it's not intended as the primary control handler for systems.
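The simplest of those constraints, clamping setpoint changes, can be sketched in a few lines (plain Java; the goal and step values are arbitrary — WPILib also provides a SlewRateLimiter class that does this with real time units):

```java
public class SetpointRamp {
    // Move the active setpoint toward the goal by at most maxStep per loop,
    // so the PID only ever sees a small error.
    public static double step(double current, double goal, double maxStep) {
        double delta = Math.max(-maxStep, Math.min(maxStep, goal - current));
        return current + delta;
    }

    public static void main(String[] args) {
        double setpoint = 0;
        for (int i = 0; i < 5; i++) {
            setpoint = step(setpoint, 15, 2); // goal jumps to 15; ramp at 2 per loop
            System.out.println(setpoint);     // 2, 4, 6, 8, 10 — never a jump of 15
        }
    }
}
```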

Tuning

The math

PID

Success Criteria

  • Configure a motion system with PID and FeedForward
  • Add a trapezoidal motion profile command (runs indefinitely)
  • Create a decorated version with exit conditions
  • Create a small auto sequence to cycle multiple points
  • Create a set of buttons for different setpoints
Motion Profiles
Inverse Kinematics
Forward Kinematics

Synopsis

Superstructure component that adds additional control axes between intakes and scoring mechanisms. In practice, indexers often temporarily act as part of those systems at different points in time, as well as performing their own specialized tasks.

Indexers are common when handling multiple game pieces for storage and alignment, when game pieces require re-orientation, adjustment, or temporary storage, and for flywheel systems that need to isolate game piece motion from spinup.

Success Criteria

  • ???

Code Considerations

Setting up an indexer is often a challenging process. It will naturally inherit several design goals and challenges from the systems it's connected to. This means it will often have a more complex API than most systems, often adopting notation from the connected systems.

The Indexer is often sensitive to hardware design quirks and changes from those adjacent systems, which can change their behavior, and thus the interfacing code.

Additionally, game piece handoffs can be mechanically complex, and imperfect. Often Indexers absorb special handling and fault detection, or at least bring such issues to light. Nominally, any such quirks are identified and hardware solutions implemented, or additional sensing is provided to facilitate code resolutions.

Sensing

Indexers typically require some specific information about the system state, and tend to be a place where some sort of sensor ends up as a core operational component. The exact type and placement vary by archetype, but often involve

  • Break beam sensors: These provide a non-contact, robust way to check for game piece presence
  • Current/speed sensing: Many game pieces can be felt by the motors as a change in current draw or speed

Indexer Archetypes

Superstructure Indexer

Superstructure component that holds a large amount of kinetic energy at a high velocity. Typically paired with shooters.

Success Criteria

  • Create a Flywheel system
  • Tune with appropriate FeedForwards + PID to hit and maintain target RPMs
SuperStructure Flywheel

A shooter is simply a flywheel and supporting infrastructure for making game pieces fly from a robot.

Success Criteria

Typically a "shooter" consists of

  • a SuperStructure Flywheel to serve as a mechanical means to maintain momentum
  • A Superstructure Indexer to time shots and ensure the shooter is at the intended speed
  • A targeting system, often using Odometry or Vision
  • A trajectory evaluation to control target RPM. This can be fixed targets, Lookup Tables, or more complex trajectory calculations
Superstructure Shooter

Success Criteria

Advanced computation for calculating optimal shot angles and RPMs

Flight Trajectory Calculations

Goals:

Interface with swerve for autos and non-trivial teleop interactions

Success Criteria

  • Changing point of rotation in real time
  • Move from Point to Point using a PID
  • Move from point to point using a motion profile
  • Create a command that allows translation while aimed at a bearing
  • Create a command that allows translation while aimed at a Pose2d
Swerve Motion

Success Criteria

  • Create new drive subsystem
  • Create and configure a YAGSL drivetrain
  • Tune the YAGSL drivetrain and controls for manual driving
  • Adjust parameters to ensure accurate auto driving and odometry tracking
Swerve Bringup
Swerve Odometry

Success Criteria

  • Set up a mock project with a nominal, standard code structure

Project should have a subsystem that

  • Is in a subsystem folder
  • Has 3 components (logic, Physics Simulation, Mechanism2d)
  • Has a factory method to get a control command (can be mocked up)
  • Has a trigger that indicates a mechanism state (can be mocked up based on timers)

Has an additional sensor subsystem that

  • Provides a trigger for a condition (can be mocked up)

Has a controller, and an Autos class to hold autos

  • With an auto chooser initialization
  • A single basic auto using subsystem and sensor
Code Structuring

Success Criteria

  • Oh no
PhotonVision Model Training

Goals

Interact with the PhotonVision UI and basic code structures

Success Criteria

  • Connect to the WebUI
  • Set up a camera
  • Set up AprilTag Target
  • Read target position via NT

Port Forwarding

This allows you to access PhotonVision via the roboRIO USB port.
This can be useful when debugging at competitions.
https://docs.photonvision.org/en/latest/docs/quick-start/networking.html

PhotonVision Basics

Success Criteria

  • Set up a pipeline to identify april tags
  • Configure camera position relative to robot center
  • Set up a
PhotonVision Odometry

Success Criteria

  • Configure the PV networking
  • Configure the PV hardware
  • Set up a camera
  • Create a Vision code class
  • Configure PV class to communicate with the hardware
PhotonVision Bringup

Goals

Understand how to efficiently communicate to and from a robot for diagnostics and control

Success Criteria

Lesson

Glass
  • Graphs
  • Field2D
  • Poses
  • Folders
  • Mechanism2d
Elastic
  • Widget options
  • Driverstation setup
Basic Telemetry

Success Criteria

  • Create a standard Arm or Elevator
  • Model the system as a Mechanism2D
  • Create a Physics model class
  • Configure the physics model
  • Tune the model to react in a sensible way. It does not need to match a real world model
Physics Simulation

Success Criteria

  • Create a basic Arm or Elevator motion system
  • Create a Mechanism 2D representation of the control angle
  • Create additional visual flair on mechanism to help indicate mechanism context
Mechanism2d

Success Criteria

  • Add a Object detection pipeline
  • Detect a game piece using color detection
  • If available, detect it using an ML object model
PhotonVision Object Detection

AdvantageKit?

Success Criteria

  • ??? Do we need or want this here?
  • Need to find a way to actually use it efficiently in beneficial way
AdvantageKit

Success Criteria

  • Choose a PathPlanning tool
  • Implement the Java framework for the selected tool
  • Model the robot's physical parameters for your tool
  • Drive a robot along a target trajectory using one of these tools

Planning vs other methods

Do you need path planning to make great autos? Maybe! But not always.

PathPlanning can give you extremely fast, optimized autos, allowing you to squeeze every fraction of a second from your auto. However, it can be challenging to set up, and has a long list of requirements to get even moderate performance.

Further Research

Pure Pursuit

Unlike "path planning" algorithms that attempt to define and predict robot motion, Pure Pursuit simply acts as a reactive path follower, as the name somewhat implies.

pathfinding-pure-pursuit.png

This algorithm is fairly simple and conceptually straightforward, but with some notable limitations. However, the concept is very useful for advancing simpler autos.

PathPlanning Tools

Success Criteria

  • Write the tuning functions for a system
  • Get the system ID values
  • Update the system with the values
System Identification

Goals

Understand the typical Git operations most helpful for day-to-day programming

Completion Requirements

This module is intended to be completed alongside other tasks.

  • Initialize a git repository in your project
  • Create an initial commit
  • Create several commits representing simple milestones in your project
  • When moving to a new skill card, create a new branch to represent it. Create as many commits on the new branch as necessary to track your work for this card.
  • When working on a skill card that does not rely on the previous branch, switch to your main branch, and create a new branch to represent that card.
  • On completion of that card (or card sequence), merge the results of both branches back into main.
  • Upon resolving the merge, ensure both features work as intended.

Topic Summary

  • Understanding git
  • workspace, staging, remotes
  • fetching
  • Branches + commits
  • Pushing and pulling
  • Switching branches
  • Merging
  • Merge conflicts and resolution
  • Terminals vs integrated UI tools

In general

(Diagram: a main branch and a featureName branch, made of commits labeled by their short hashes)

Git Fundamentals

Git is a "source control" tool intended to help you manage source code and other text data.

Git has many superpowers, but the basic level provides "version control"; This allows you to create "commits", which allow you to capture your code's state at a point in time. Once you have these commits, git lets you go back in time, compare to what you've done, and more.

(Diagram: main branch history — "new empty project" → "Added a subsystem" → "Added another subsystem" → "add commands" → "Ready to go to competition")

Diffs

Fundamental to Git is the concept of a "difference", or a diff for short. Rather than just duplicating your entire project each time you want to make a commit snapshot, Git actually just keeps track of what you've changed.

In a simplified view, updating this simple subsystem

/**Example class that does a thing*/
class ExampleSubsystem extends SubsystemBase{
	private SparkMax motor = new SparkMax(1);
	ExampleSubsystem(){}
	public runMotor(){
		motor.run(1);
	}
	public stop(){/*bat country*/}
	public go(){/*fish*/}
}

to this

/**Example class that does a thing*/
class ExampleSubsystem extends SubsystemBase{
	private SparkMax motor = new SparkMax(1);
	private Encoder encoder = new Encoder();
	ExampleSubsystem(){}
	public runMotor(double power){
		motor.run(power);
	}
	public stop(){/*bat country*/}
	public go(){/*fish*/}
}

would be stored in Git as

class ExampleSubsystem extends SubsystemBase{
	private SparkMax motor = new SparkMax(1);
+	private Encoder encoder = new Encoder();
	ExampleSubsystem(){}
-	public runMotor(){
-		motor.run(1);
+	public runMotor(double power){
+		motor.run(power);
	}
	public stop(){/*bat country*/}

With this difference, the changes we made are a bit more obvious. We can see precisely what we changed, and where we changed it.
We also see that some stuff is missing in our diff: the first comment is gone, and we don't see go or our closing brace. Those didn't change, so we don't need them in the commit.

However, there are some unchanged lines, near the changed lines. Git refers to these as "context". These help Git figure out what to do in some complex operations later. It's also helpful for us humans just taking a casual peek at things. As the name implies, it helps you figure out the context of that change.

We also see something interesting: when we "change" a line, Git actually

  • Marks it as deleted
  • Marks it as added

Simply put, removing a line and then adding the new one is just easier most of the time. However, some tools detect this, and will bold or highlight the specific bits of the line that changed.
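You can see Git produce exactly this kind of diff yourself in a throwaway sandbox (shell; the file name and contents here are arbitrary):

```shell
mkdir diff-demo && cd diff-demo && git init -q
git config user.email "demo@example.com" && git config user.name "Demo"
printf 'motor.run(1);\n' > Example.java
git add Example.java && git commit -qm "initial"
printf 'motor.run(power);\n' > Example.java
git diff    # shows "-motor.run(1);" and "+motor.run(power);"
```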

Commits + Branches

Now that we have some changes in place, we want to "Commit" that change to Git, adding it to our project's history.

A commit in git is just a bunch of changes, along with some extra data. The most relevant parts are

  • A commit "hash", which is a unique key representing that specific change set
  • The "parent" commit, which these changes are based on
  • The actual changes + files they belong to.
  • Date, time, and author information
  • A short human readable "description" of the commit.

These commits form a sequence, building on top of the earliest state of the project. We generally assign a name to these sequences, called "branches".

A typical project starts on the "main" branch; after a few commits, you'll end up with a nice, simple history like this.

(Diagram: main branch history — "new empty project" → "Added a subsystem" → "Added another subsystem" → "add commands" → "Ready to go to competition")

It's worth noting that a branch really is just a name that points to a commit, and is mostly a helpful book-keeping feature. The commits and commit chain do all the heavy lifting. Basically anything you can do with a branch can be done with a commit's hash instead!

Multiple Branches + Switching

We're now starting to get into Git's superpowers. You're not limited to just one branch. You can create new branches, switch to them, and then commit, to create commit chains that look like this:

(Diagram: the main branch as above, plus a competition branch adding "mess for qual 4" → "mess for qual 8" after "Ready to go to competition")

Here we can see that mess for qual 4 and mess for qual 8 are built off the main branch, but kept as part of the competition branch. This means our main branch is untouched. We can now switch back and forth using git switch main and git switch competition to access the different states of our codebase.

We can, in fact, even continue working on main adding commits like normal.

(Diagram: main gains a new commit "added optional sensor", while competition still holds "mess for qual 4" → "mess for qual 8")

Being able to have multiple branches like this is a foundational part of how Git works, and a key detail of its collaborative model.

However, you might notice the problem: We currently can access the changes in competition or main, but not both at once.

Merging

Merging is what allows us to do that. It's helpful to think of it as pulling the changes from another branch into your current branch.

If we merge competition into main, we get this. Both changes ready to go! Now main can access the competition branch's changes.

(Diagram: the two branches from before, with a new "merge comp into main" commit on main)

However, we can equally do main into competition, granting competition access to the changes in main.

(Diagram: the two branches from before, with a new "merge main into comp" commit on competition)

Now that merging is a tool, we have unlocked the true power of git. Any set of changes is built on top of the others, and we can grab changes without interrupting our existing code or any other changes we've been making!

This feature powers git's collaborative nature: You can pull in changes made by other people just as easily as you can your own. They just have to have the same parent somewhere up the chain so git can figure out how to step through the sequence of changes.
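The whole branch-and-merge flow above boils down to a handful of commands (shell; the branch, file, and commit names are arbitrary, and `git init -b` needs git 2.28+):

```shell
mkdir merge-demo && cd merge-demo && git init -q -b main
git config user.email "demo@example.com" && git config user.name "Demo"
echo "drive code" > robot.txt
git add . && git commit -qm "new empty project"
git switch -q -c competition      # create and switch to a new branch
echo "qual 4 fix" >> robot.txt
git add . && git commit -qm "mess for qual 4"
git switch -q main                # main is untouched by the fix
git merge -q competition          # pull the competition changes into main
cat robot.txt                     # now contains both lines
```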

Branch Convention

Workspace, Staging, Origin

Git is a distributed system, and as such has a few different places that all these changes can live.

The most apparent one is your actual code on your laptop, forming the workspace. As far as you're concerned, this is just the files in the directory. However, Git sees them as the culmination of all changes committed in the current branch, plus any uncommitted changes.

The next one is "staging": this is just the incomplete next commit, and holds all the changes you've added as part of it. Once you properly commit these changes, your staging will be cleared, and you'll have a new commit in your tree.

It basically looks like this:

(Diagram: the main branch history, followed by the pending "staging" area and then the "workspace" at the tip)

Next is a "remote", representing a computer somewhere else. In most Git work, this is just Github. There's several commands focused on interacting with your remote, and this just facilitates collaborative work and offsite backup.

Handling Merge Conflicts

When both branches change the same lines, Git can't tell which version to keep, and instead marks the conflict directly in the file:

class ExampleSubsystem extends SubsystemBase{
	private SparkMax motor = new SparkMax(1);
<<<<<<< HEAD
	public runMotor(double power){
		motor.run(power);
=======
	public runMotor(){
		motor.run(0.5);
>>>>>>> competition
	}
	public stop(){/*bat country*/}

Everything between <<<<<<< HEAD and ======= is your current branch's version of those lines; everything between ======= and >>>>>>> competition is the incoming branch's version. To resolve the conflict, edit the file into the version you actually want (often a mix of both), remove the markers, then git add the file and complete the merge with a commit.
The critical commands

git init: This creates a new git repository for your current project. You want to run this in the base directory of your project.
git add: Stages changes, marking them to be part of the next commit.

Git from VSCode

Other Git tools

There's a lot of tools that interact with your Git repository, but it's worth being mindful about which ones you pick! Many tools do unexpected things under the hood, which can leave your history in a state you don't understand.

Git Basics

Success Criteria

  • Create a Singleton class
  • Use it in multiple places in your code

Summary

Singletons are a coding structure (or "pattern") that represents a unique entity. It's designed to allow one, and only one instance of a class.

This tends to be useful for controlling access to unique items like physical hardware, IO channels, and other such items.

The techniques used in this pattern are also helpful for cases where you might be fine with multiple instances, but you need to restrict the total number, or keep track in some way.

Bare Minimum Singleton pattern

public class ExampleSingleton{
    private static ExampleSingleton instance;

	//note private constructor
    private ExampleSingleton(){}
    
    public static ExampleSingleton getInstance(){
        //Check to see if we have an instance; If not, create it. 
        if(instance==null) instance = new ExampleSingleton();
        //If so, return it. 
        return instance;
    }
    
	// Methods just work normally.
	public double exampleMethod(){
        return 0; 
    }
}

There's a few key details here:

  • private ExampleSingleton(){} The constructor is private, meaning you cannot create objects using new ExampleSingleton(). If you could do that, then you could create a second instance of the class! So, this is private, meaning only the class itself can create an instance.
  • public static ExampleSingleton getInstance() ; This does the heavy lifting: It sees if we have an instance, and if not, it actually creates one. If we have an instance, it just returns a reference to it. This is how we ensure we only ever create one instance of the class. This is static, which allows us to call it on the base class (since we won't have an instance until we do).
  • private static ExampleSingleton instance; This is the reference for the created instance. Notice that it's static, meaning that the instance is "owned" by the base class itself.

Example Sensor Singleton

public class ExampleSensorSystem{
    private static ExampleSensorSystem instance;
    
    //Example object representing a physical object, belonging to
    //an instance of this class.
    //If we create more than one, our code will crash!
    //Fortunately, singletons prevent this. 
    private Ultrasonic sensor = new Ultrasonic(0,1);

    private ExampleSensorSystem(){} //note private constructor
    
    public static ExampleSensorSystem getInstance(){
        //Check to see if we have an instance; If not, create it. 
        if(instance==null) instance = new ExampleSensorSystem();
        //If so, return it. 
        return instance;
    }
    
    public double getDistance(){
        return sensor.getRangeInches();
    }
}

Elsewhere, these are all valid ways to interface with this sensor, and get the data we need

ExampleSensorSystem.getInstance().getDistance();


var sensor = ExampleSensorSystem.getInstance();
// do other things
sensor.getDistance();

When To Use Singletons

Rarely is often the right answer. While Singletons are useful in streamlining code in some circumstances, they can also obscure where and how you're using something. Here are the general considerations:

  • You have something that is necessarily "unique"
  • It will be accessed by several other classes, or have complicated scope.
  • It is immutable: Once created, it won't be changed, altered, or re-configured.
  • You will not have any code re-use

In cases where it's less obvious, the "dependency injection" pattern makes more sense. You'll see the Dependency Injection pattern used in a lot of FRC code for subsystems. Even though these are unique, they're highly mutable, and we want to track access due to command requirements and lockouts.

Similarly, for sensors we probably want multiple of the same type. This means if we use a Singleton, we would have to re-write the code several times (or get creative with class abstractions)!

Dependency Injection

This pattern consists of passing a reference to items in a direct, explicit way, like so:

//We create a subsystem, then hand it to the command that needs it
ExampleSubsystem exampleSubsystem = new ExampleSubsystem();
ExampleCommand exampleCommand = new ExampleCommand(exampleSubsystem);

class ExampleCommand {
	ExampleSubsystem example;

	ExampleCommand(ExampleSubsystem example) {
		this.example = example;
	}

	public void exampleMethod() {
		//has access to the example subsystem
	}
}
Singletons

Success Criteria

Synopsis

A pre-computed list of input and output values.

Can be used to help model non-trivial conditions where mathematical models are complicated, or don't apply effectively to the problem at hand.

Commonly used for modelling Superstructure Shooter
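As a concrete sketch, a lookup table can be built on Java's TreeMap, linearly interpolating between the two nearest calibration points. The distance and RPM values below are made up for illustration, and the class name is hypothetical; WPILib also ships an InterpolatingDoubleTreeMap that handles the interpolation for you.

```java
import java.util.TreeMap;

public class ShooterTable {
    // Measured (distance in meters -> shooter RPM) calibration points; values are illustrative.
    private static final TreeMap<Double, Double> table = new TreeMap<>();
    static {
        table.put(1.0, 2000.0);
        table.put(2.0, 2600.0);
        table.put(3.0, 3400.0);
    }

    /** Linearly interpolate an RPM for the given distance, clamping at the table edges. */
    public static double lookup(double distance) {
        var floor = table.floorEntry(distance);   // nearest point at or below
        var ceil = table.ceilingEntry(distance);  // nearest point at or above
        if (floor == null) return ceil.getValue();   // below the table: clamp to first entry
        if (ceil == null) return floor.getValue();   // above the table: clamp to last entry
        if (floor.getKey().equals(ceil.getKey())) return floor.getValue(); // exact hit
        double t = (distance - floor.getKey()) / (ceil.getKey() - floor.getKey());
        return floor.getValue() + t * (ceil.getValue() - floor.getValue());
    }
}
```

With the sample points above, ShooterTable.lookup(1.5) returns 2300.0, halfway between the 1.0m and 2.0m entries.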

Lookup Tables

A Future is a simplified and much more user-friendly application of threading

Success Criteria

  • ???

Primer on Threads

A "thread" normally refers to a single chain of code being executed. Most code is "single threaded", meaning everything happens in order; For something to be done, it has to wait its turn.

With proper code setup, you can make it appear that code is doing multiple things at once. There are a few terms for this, but usually "concurrency" or "time sharing" come up here. However, you're still fundamentally waiting for other code to finish, and one slow piece of code holds up everything. This might be a complex computation, or a slow IO transfer across a network or data bus.
IO tasks like these take little computational time, but do take real-world time in which we could be doing other things.

Threads, on the other hand, can utilize additional processor cores to run code completely isolated and independently. Which is where the trouble starts.

Thread Safety

Threads come with a bit of inherent risk: because things are happening asynchronously (as in, not in sync with each other), you can develop issues if things are not done when you expect them to be.

//Set up two variables (arrays, so the lambdas can write to them)
double[] x = new double[1];
double[] y = new double[1];
//These two tasks are slow, so spawn a thread for each!
new Thread(() -> x[0] = /*long computation for x*/ 0).start();
new Thread(() -> y[0] = /*long computation for y*/ 0).start();
//Sum things up!
double z = x[0] + y[0];

This is completely broken: it's unlikely that both threads will have finished by the time the main thread tries to use their values, so z gets computed from stale or default data. This example is obvious, but in practice, this can be very sneaky and difficult to pin down.

Actual example of this data race

In 2024, we had code managing Limelight data, which would

  • Check tv, the target valid data: This value means everything else is valid
  • Get tx and ty, along with getBotPose
  • Try to compute our pose
  • .... and data is wrong?

What happened was simply that in some cases, after checking tv to assert valid data, the data changed, causing our calculations to break. The remote system (effectively a different thread) changed the data underneath us.

In some cases, we'd get values that should be valid, but instead they resulted in crashes.

Dealing with those

There's lots of strategies to manage threads, most with notable downsides.

  • Avoiding threads: The easiest strategy, but you don't improve performance
  • Mutexes: Short for "mutually exclusive", and represents a lock. When using data shared with threads, you lock it, and unlock it when you're done. Notably, this means you spend a lot of time trying to deal with these locks.
  • Splits and joins: If a thread ends, you don't have problems! So, you can just check a thread state and see if it's done with your value. Don't forget to restart it if needed.
  • Message passing: Simply don't share data. Instead, just throw it in a queue, and let stuff handle it when it needs to.
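As a sketch of the split-and-join strategy, the broken example from earlier can be fixed by joining both threads before reading their results. The placeholder computations here just return fixed numbers for illustration.

```java
public class JoinExample {
    public static double compute() throws InterruptedException {
        double[] x = new double[1];
        double[] y = new double[1];
        // Spawn the slow work on separate threads.
        Thread a = new Thread(() -> x[0] = 2.0 /* stand-in for a long computation for x */);
        Thread b = new Thread(() -> y[0] = 3.0 /* stand-in for a long computation for y */);
        a.start();
        b.start();
        // join() blocks until each thread has finished, so the reads below are safe.
        a.join();
        b.join();
        return x[0] + y[0];
    }
}
```

Because join() guarantees each thread has finished (and its writes are visible), the final sum is always computed from completed values.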

There's other strategies as well, but this brings us to...

Futures

A Future combines several of those into one, very user friendly package. Conceptually, it represents a "future value" that has not yet been calculated, while actually containing the code to get that value.

Because it's oriented with this expectation, they're easy to think about and use. They're almost as straightforward as any other variable.

//create a future and pass it some work.
CompletableFuture<Double> future = CompletableFuture.supplyAsync( ()-> {Timer.delay(5); /*some long running calculation*/ return 4.0;} );
System.out.println("waiting....");
System.out.println( future.get() );

That's it. For the simplicity involved, it doesn't feel like you're using threads.... but you are. Notice that "waiting" prints out instantly; about 5 seconds before the number, in fact.

Futures handle most of the "busywork" for you: managing thread operation, checking to see if it's done, and retrieving the return value. The thread runs in the background, but if it's not done by the time you get to future.get(), it'll automatically block the main thread, wait until the future thread is done, get the value, and then resume. However, if the future is already done, you just race on ahead. The following example demonstrates this clearly.

//create a future and pass it some work.
CompletableFuture<Double> future = CompletableFuture.supplyAsync( ()-> {Timer.delay(5); /*some long running calculation*/ return 4.0;} );
System.out.println("waiting....");
Timer.delay(6); // do some busywork on the main thread too
System.out.println("Done with main thread!");
System.out.println( future.get() ); //will print instantly; The thread finished during main thread's work!
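If you'd rather not block at all, CompletableFuture also lets you peek at its state: isDone() reports whether the work has finished, and getNow(fallback) returns the result if it's ready, or your fallback if it isn't. A minimal JDK-only sketch (using Thread.sleep in place of Timer.delay, and made-up values):

```java
import java.util.concurrent.CompletableFuture;

public class FutureCheck {
    public static double slowWork() {
        try { Thread.sleep(200); } catch (InterruptedException e) { }
        return 4.0;
    }

    public static void main(String[] args) {
        CompletableFuture<Double> future = CompletableFuture.supplyAsync(FutureCheck::slowWork);
        // Still running: getNow() hands back the fallback instead of blocking.
        System.out.println(future.getNow(-1.0));
        double result = future.join(); // like get(), but without checked exceptions
        System.out.println(future.isDone()); // true once join() returns
        System.out.println(result);
    }
}
```

This pattern lets a periodic loop check in on background work each cycle without ever stalling.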

Actually using them in FRC

Threads would be really nice in a few places, but in particular, building autos. Autos take a very long time to build, and you have a lot of them. And you don't want them wasting time if you're not actually running an auto.

But remember that Futures represent a "future value", and "contain the code to build it". A Command is a future value, and has a process to build it.... so it's a perfect fit. But you also have to select one of several autos. This is easily done:

CompletableFuture<Command> selectedAutoFuture = CompletableFuture.supplyAsync(this::doNothing);
SendableChooser<Supplier<Command>> autoChooser = new SendableChooser<>();

A full example is in /Programmer Guidance/auto-selection, but the gist is that

  • A Future takes a Supplier<Command>: A function that returns a command
  • The AutoChooser then has a list of functions that build and return an auto command.
  • When you change the chooser, you start a new future, and start building it.
  • If and when the auto process should start.... the code just waits for the process to finish as needed, and runs it.
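The flow above can be sketched without any WPILib dependencies by standing in Supplier<String> for Supplier<Command>, and a plain method call for the chooser's change callback. The class and method names here are hypothetical; see /Programmer Guidance/auto-selection for the full version.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

public class AutoBuilder {
    // Start with a trivially "built" default, so getSelectedAuto() always has something.
    private CompletableFuture<String> selectedAutoFuture =
            CompletableFuture.completedFuture("Do Nothing");

    /** Called whenever the chooser selection changes: kick off a fresh background build. */
    public void onSelectionChanged(Supplier<String> autoSupplier) {
        selectedAutoFuture = CompletableFuture.supplyAsync(autoSupplier);
    }

    /** Called at auto start: waits only if the build hasn't finished yet. */
    public String getSelectedAuto() {
        return selectedAutoFuture.join();
    }
}
```

In real code the Supplier would construct a Command, but the structure is identical: each selection change replaces the old future, and auto start simply joins whichever build is current.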

Conveniently, you don't need to return values. You can, if needed, use the void version, runAsync, which takes a Runnable or non-returning lambda.

CompletableFuture<Void> voidedFuture = CompletableFuture.runAsync(()->{/* background work */}); 
if(voidedFuture.isDone()) /* do a thing */ ;

While not exactly the intended use case, this allows you to easily run and monitor background code without worry.

Gotchas

Be aware, that as with all threads you generally should not

  • Write to data accessible by other threads; You don't know when something is trying to read that value. Do writes in the main thread.
  • Read data being written to by other threads; This should be easy to reason about. Constants and fixed values are fine, but don't trust state variables.

Additionally, Futures are most effective when your code starts a computation, and then reacts to the completion of that computation afterward. They're intended for run-once use cases.

For long-running background threads, you'd want to use something else better suited to it.

Pseudo Threads

Pseudo-threads are "thread-like" code structures that look and feel like threads, but aren't really.

WPILib offers a convenient way to run pseudo-threads through the use of addPeriodic(). This registers a Runnable at a designated loop interval, but it still runs within the thread safety of normal robot code.

For many cases, this can handle certain time-sensitive features while mitigating the hazards of real threads.

Real Threads

Native Java Threads are a suitable way to continuously run background tasks that need to truly operate independent of the main thread. However, any time they interface with normal threads, you expose the hazard of data races or data corruption; Effectively, data changes underneath you, causing weird numerical glitches, or outright crashes.

In these cases, you need to meticulously manage access to the threaded data. Java has numerous built in helpers, but there's no shortcut for responsible coding.

Mutexes and Synchronized

The easiest way is use of the synchronized keyword in Java. This is a method modifier (like public or static), which declares that only one thread at a time may run any of the object's synchronized methods.

private double number=0;

public synchronized void increment(){
    number+=1;
}
public synchronized void double_increment(){
    number+=2;
}
// do some threads and run our code
public void periodicThreadA(){ increment(); }
public void periodicThreadB(){ double_increment(); }

This is it; if both A and B try to run simultaneously, one thread will block until the other's synchronized call completes. Because of how we structure FRC code, this is often a perfectly suitable strategy; any function trying to run a synchronized call has to wait until the other synchronized functions are done.

However, this comes with potential performance issues: the lock is actually protecting the whole object (this), rather than the narrower value of number. All synchronized methods share that one mutex, meaning if you have multiple, independently updating values, they're blocking each other needlessly.

We can get finer-grain control by use of structures like this:

private double number=0;
private Object numberLock = new Object(); 

public void increment(){
    synchronized (numberLock){
        number+=1;
    }
}

public void double_increment(){
    synchronized (numberLock){
        number+=2;
    }
}

// do some threads and run our code
public void periodicThreadA(){ increment(); }
public void periodicThreadB(){ double_increment(); }

This structure behaves identically, but now we've explicitly stated the mutex; instead of implicitly locking the whole object, we lock numberLock, which clearly guards the data we care about: number.

Note that in both cases, any access to number needs to go through a synchronized item.

Helpfully, you can clean this up in many common cases, as shown in the following example: any Object (any class or data structure; effectively everything except primitives like int, float, and boolean) can be locked directly, avoiding a separate mutex. However, we may want to develop a notation to demarcate thread-accessed objects like this.

private Pose2d currentPose = new Pose2d(); 

public void do_pose_things(){
    synchronized (currentPose){ //the object can hold its own thread mutex
        currentPose = new Pose2d();
        //careful: after this reassignment, later threads will lock on the new object
    }
}
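One caveat with locking on the object itself: once you reassign currentPose, other threads may synchronize on the old object while you hold the new one. When the operation is simply "replace the whole value atomically", java.util.concurrent.atomic.AtomicReference sidesteps the lock entirely. The sketch below uses a plain record standing in for WPILib's Pose2d, so it is self-contained.

```java
import java.util.concurrent.atomic.AtomicReference;

public class PoseHolder {
    // Stand-in for WPILib's Pose2d, just for this sketch.
    public record Pose(double x, double y) {}

    private final AtomicReference<Pose> currentPose = new AtomicReference<>(new Pose(0, 0));

    // Safe to call from any thread: the reference swaps atomically.
    public void updatePose(Pose newPose) {
        currentPose.set(newPose);
    }

    // Readers always see a complete, consistent Pose, never a half-written one.
    public Pose getPose() {
        return currentPose.get();
    }
}
```

This works because the Pose itself is immutable; only the reference changes, and AtomicReference guarantees that swap is atomic and visible to all threads.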

Queuing and message passing

Message passing is another threading technique that allows threads to interact safely. You simply take your data, and toss it to another thread, where it can pick it up as it needs to.

SynchronousQueue is a useful and simple case; this is a queue optimized for handoffs between threads. Instead of suppliers adding values indirectly, this queue allows functions to directly block until the other thread arrives with the data it wants. This is useful when one side is significantly faster than the other, making the time spent waiting non-critical. There are methods for both fast suppliers with slow consumers, and fast consumers with slow suppliers.

SynchronousQueue<Integer> queue = new SynchronousQueue<>();

public void fastSupplier(){ //ran at high speeds
    int value = 0; /*some value, such as quickly running sensor read*/
    queue.offer(value); //will not block; Will simply see there's no one listening, and give up
}
public void slowConsumer() throws InterruptedException { //ran at low speeds
    int value = queue.take(); //will block this thread, waiting until fastSupplier tries to make another offer.
    //do something with the value
}

In most cases though, you want to keep track of all reported data, but the rate at which it's supplied doesn't always match the rate at which it's consumed. A good example is vision data for odometry. It might be coming in at 120FPS, or 0FPS. Even if it's coming in at the robot's 50hz, it's probably not exactly timed with the function.

Depending on the requirements, you can use an ArrayBlockingQueue (first in, first out) or a LinkedBlockingDeque (which also supports last in, first out). These have different uses, depending on the desired order.

ArrayBlockingQueue<Pose2d> queue = new ArrayBlockingQueue<Pose2d>(10); //capacity is required

public void VisionSupplier(){
    Optional<Pose2d> value = vision.getPoseFromAprilTags();
    if(value.isPresent()){
        if(queue.remainingCapacity() < 1) queue.poll(); // delete the oldest item if we don't have space
        queue.offer(value.get()); //add the newest value.
    }
}

public void VisionConsumer() throws InterruptedException { //ran at low speeds
    var value = queue.take(); //grab the oldest value from the queue or block to wait for it
    odometry.update(value);
}

Message passing helps you manage big bursts of data and lets threads block/wait for new data, but it does introduce one problem: you have to make sure your code behaves well when your queue is full or empty.

In this case, it's sensible to just throw away the oldest value in our queue; We'll replace it with a more up-to-date one anyway.
We also block when trying to retrieve new data. This is fine for a dedicated thread, but run on our main thread it would cause our bot to halt if we drove away from a vision target. In that case, we'd want to check whether there's a value first, or use poll(), which returns null instead of waiting. The Java docs can help you find the desired behavior for the various operations.
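A quick illustration of the non-blocking variants, using a small Integer queue (the capacity and values are arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;

public class PollDemo {
    public static void main(String[] args) throws Exception {
        ArrayBlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // capacity of 2
        queue.offer(1); // returns true: accepted
        queue.offer(2); // returns true: accepted
        boolean accepted = queue.offer(3); // returns false: queue full, value dropped, no blocking
        System.out.println(accepted);
        System.out.println(queue.poll()); // oldest value first
        System.out.println(queue.poll());
        System.out.println(queue.poll()); // null: queue empty, no blocking
    }
}
```

Contrast this with put() and take(), which block until there is room or data, respectively; choosing the right pair is most of the work of behaving well at the queue's edges.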

Also be wary about sizing: a LinkedBlockingDeque is effectively unbounded by default (Integer.MAX_VALUE elements), meaning if your supplier is faster than your consumer, you'll steadily run out of memory. An ArrayBlockingQueue forces you to pick a capacity up front. Setting a reasonable maximum size is the best course of action.

Threading