tags:
- stub
aliases:
- Intake
Requires:
SuperStructure Rollers
Sensing Basics
Recommends:
State Machines
Requires as needed:
SuperStructure Rollers
SuperStructure Elevator
SuperStructure Arm
Intake complexity can range from very simple rollers that capture a game piece, to complex actuated systems intertwined with other scoring mechanisms.
A common "over the bumper" intake archetype is a deployed system that extends past the frame perimeter to collect game pieces, then retracts back within the bumpers, often automatically once a game piece is detected.
The speed of deployment and retraction both impact cycle times, forming a critical competitive aspect of the bot.
The automatic detection and retraction provide cycle advantages (streamlining the driver experience), but also prevent fouls and damage due to the collisions on the deployed mechanism.
tags:
- stub
aliases:
- Rollers
- Roller
Requires:
Motor Control
Recommends:
FeedForwards
PID
tags:
- stub
Requires:
Commands
Encoder Basics
TODO:
Add some graphs
https://github.com/DylanHojnoski/obsidian-graphs
Write synopsis
https://docs.revrobotics.com/revlib/spark/closed-loop
A PID system is a Closed Loop Controller designed to reduce system error through a simple, efficient mathematical approach.
You may also appreciate Chapters 1 and 2 of controls-engineering-in-frc.pdf, which cover PIDs very well.
To get an intuitive understanding of PIDs and feedback loops, it can help to start from scratch and recreate one from basic assumptions and simple code.
Let's start from the core concept of "I want this system to go to a position and stay there".
Initially, you might simply say "OK, if we're below the target position, go up. If we're above the target position, go down." This is a great starting point, with the following pseudo-code.
setpoint= 15 //your target position, in arbitrary units
sensor= 0 //Initial position
if(sensor < setpoint){ output = 1 }
else if(sensor > setpoint){ output = -1 }
motor.set(output)
However, you might see a problem. What happens when setpoint and sensor are equal?
If you responded with "It rapidly switches between full forward and full reverse", you would be correct. If you also thought "This sounds like it might damage things", then you'll understand why this controller is named a "Bang-bang" controller, after the noises it tends to make.
Your instinct might be to simply not go full power. That doesn't solve the problem, but it reduces its negative impacts. It also creates a new problem: now the system oscillates at the setpoint (though less loudly), and it takes longer to get there.
So, let's complicate this a bit. Let's take our previous bang-bang, but split the response into two different regions: far away, and closer. This is easier if we introduce a new term: Error. Error is just the difference between our setpoint and our sensor reading, which simplifies the code and procedure. It's a term we'll use a lot.
run(()->{
setpoint= 15 //your target position, in arbitrary units
sensor= 0 //read your sensor here
error = setpoint-sensor
if(error > 5){ output = 1 }
else if(error > 0){ output = 0.2 }
else if(error < -5){ output = -1 }
else if(error < 0){ output = -0.2 }
motor.set(output)
})
We've now slightly improved things: we can expect more measured responses when close, and fast responses when far away. But we still have the same problem: those harsh transitions across each else if. Splitting into more and more branches doesn't seem like it'll help. To fully resolve the problem, we'd need an infinite number of tiers, dependent on how far we are from our target.
With a bit of math, we can do that! Our error term tells us how far we are, and its sign tells us what direction we need to go... so let's just scale it by some value. Since this is a constant value, and the resulting output is proportional to the error, let's call it kp: our proportional constant.
run(()->{
setpoint= 15 //your target position, in arbitrary units
sensor= 0 //read your sensor here
kp = 0.1
error = setpoint-sensor
output = error*kp
motor.set(output)
})
Now we have a better behaved algorithm! At a distance of 10, our output is 1. At 5, it's half. When on target, it's zero! It scales just how we want.
Try this on a real system, and adjust the kP until your motor reliably gets to your setpoint, where error is approximately zero.
In doing so, you might notice that you can still oscillate around your setpoint if your gains are too high. You'll also notice that as you get closer, your output drops to zero. This means, at some point you stop being able to get closer to your target.
This is easily seen on an elevator system. You know that gravity pulls the elevator down, requiring the motor to push it back up. For the sake of example, let's say an output of 0.2 holds it up. Using our previous kP of 0.1, a distance of 2 generates that output of 0.2. If the distance is 1, we only generate 0.1... which is not enough to hold it! Our system actually is only stable below where we want. What gives!
This general case is referred to as "steady-state error" (sometimes "standing error"): every loop through our controller fails to reduce the error to zero, and the system eventually settles at a constant offset. So.... what if.... we just add that error up over time? We can then incorporate that accumulated error into our outputs. Let's do it.
setpoint= 15 //your target position, in arbitrary units
errorsum=0
kp = 0.1
ki = 0.001
run(()->{
sensor= 0 //read your sensor here
error = setpoint-sensor
errorsum += error
output = error*kp + errorsum*ki
motor.set(output)
})
The mathematical operation involved here is called integration, which gives this term its name: that's the "I" in PID.
In many practical FRC applications, this is probably as far as you need to go! P and PI controllers can do a lot of work, to suitable precision. This is a very flexible, powerful controller, and can get "pretty good" control over a lot of mechanisms.
This is probably a good time to read across the WPILib PID Controller page; This covers several useful features. Using this built-in PID, we can reduce our previous code to a nice formalized version that looks something like this.
PIDController pid = new PIDController(kP, kI, kD);
run(()->{
sensor = motor.getEncoder().getPosition();
motor.set(pid.calculate(sensor, setpoint));
})
A critical detail in good PID controllers is the iZone. We can easily visualize what problem this is solving by just asking "What happens if we get a game piece stuck in our system"?
Well, we cannot get to our setpoint. So, our errorSum gets larger, and larger.... until our system is running full power into this obstacle. That's not great. Most of the time, something will break in this scenario.
So, the iZone allows you to constrain the amount of error the controller actually stores. It might be hard to visualize the specific numbers, but you can just work backward from the math: if output = errorsum*kI, then maxIDesiredTermOutput = iZone*kI, so iZone = maxIDesiredTermOutput/kI.
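To make that arithmetic concrete, here's a tiny runnable sketch of the back-calculation. The variable names mirror the formula above and aren't any specific vendor API:

```java
// Working backward from output = errorsum * kI:
// if we want the I term capped at maxIDesiredTermOutput,
// then iZone = maxIDesiredTermOutput / kI.
public class IZoneMath {
    public static double iZoneFor(double maxIDesiredTermOutput, double kI) {
        return maxIDesiredTermOutput / kI;
    }

    public static void main(String[] args) {
        double kI = 0.001;
        double maxITerm = 0.2; // allow the I term at most 20% of full output
        System.out.println(iZoneFor(maxITerm, kI)); // ≈ 200 units of stored error
    }
}
```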
Lastly, what's the D in PID?
Well, it's less intuitive, but let's try. Have you seen the large spike in output when you change a setpoint? Give the output a plot, if you so desire. For now, let's just reason through a system using the previous example PI values, and a large setpoint change resulting in an error of 20.
Your PI controller is now outputting a value of 2.0; that's double full power! Your system will go full speed immediately with a sharp jolt, have a ton of momentum at the halfway point, and probably overshoot the final target. So, what we want to do is constrain the speed; we want it fast, but not too fast. So, we want to reduce the output according to how fast we're going.
Since we're focusing on error as our main term, let's look at the rate at which the error changes. When the error is changing quickly, we want to reduce the output. The difference is simply error - previousError, so a similar strategy with a new gain kD gives us output += kD*(error - previousError).
This indeed gives us what we want: When the rate of change is high, the contribution is negative and large; Acting to reduce the total output, slowing the corrective action.
However, this term has another secret power: disturbance rejection. Let's assume we're at a steady position, the system is settled, and error = 0. Now, let's bonk the system downward, giving us a positive error. Suddenly error - previousError is positive, and the derivative term generates an upward force. In this interaction, all components of the PID work in tandem to get things back in place.
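Putting all three terms together, here's a minimal, self-contained sketch of the full P+I+D loop, run against a toy simulated mass so it can execute anywhere. The plant model and gains here are invented for illustration; a real mechanism needs its own tuning:

```java
public class PidDemo {
    // Runs the P+I+D loop against a toy plant and returns the final position.
    public static double run(double setpoint, double kP, double kI, double kD, int steps) {
        double position = 0, velocity = 0, errorSum = 0;
        double previousError = setpoint - position;
        for (int i = 0; i < steps; i++) {
            double error = setpoint - position;
            errorSum += error;
            double output = error * kP + errorSum * kI + (error - previousError) * kD;
            previousError = error;
            velocity += output * 0.02; // output accelerates the toy mass
            velocity *= 0.9;           // friction damps it
            position += velocity;
        }
        return position;
    }

    public static void main(String[] args) {
        double finalPos = run(15, 0.5, 0.02, 0.8, 500);
        System.out.println(Math.abs(15 - finalPos) < 0.1); // true: settled near the setpoint
    }
}
```

Try changing the gains: zeroing kD makes the overshoot worse, and cranking kP makes it oscillate, just as described above.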
OK, that's enough nice things. Understanding PIDs requires knowing when they work well, and when they don't, and when they actually cause problems.
So, how do you make the best use of PIDs?
The short answer: a PID is an error-correction mechanism, so if you avoid adding error to begin with, you more effectively accomplish the motions you want. Throwing a PID at a system can get things moving in a controlled fashion, but care should be taken to recognize that it's not intended as the primary control handler for systems.
aliases:
- Motion Profile
- Trapezoidal Profile
tags:
- stub
aliases:
- Indexer
- Passthrough
- Feeder
Superstructure component that adds additional control axes between intakes and scoring mechanisms. In practice, indexers often temporarily act as part of those systems at different points in time, as well as performing their own specialized tasks.
Indexers are common when handling multiple game pieces for storage and alignment, when game pieces require re-orientation, adjustment, or temporary storage, and for flywheel systems which need to isolate game piece motion from spinup.
Setting up an indexer is often a challenging process. It will naturally inherit several design goals and challenges from the systems it's connected to. This means it will often have a more complex API than most systems, often adopting notation from the connected systems.
The Indexer is often sensitive to hardware design quirks and changes in those adjacent systems, which can change its behavior, and thus the interfacing code.
Additionally, game piece handoffs can be mechanically complex, and imperfect. Often Indexers absorb special handling and fault detection, or at least bring such issues to light. Nominally, any such quirks are identified and hardware solutions implemented, or additional sensing is provided to facilitate code resolutions.
Indexers typically require some specific information about the system state, and tend to be a place where some sort of sensor ends up as a core operational component. The exact type and placement can vary by archetype, but often involve
Superstructure component that holds a large amount of kinetic energy at a high velocity. Typically paired with shooters.
A shooter is simply a flywheel and supporting infrastructure for making game pieces fly from a robot.
Typically a "shooter" consists of
Advanced computation for calculating optimal shot angles and rpms
Interface with swerve for autos and non-trivial teleop interactions
Project should have a subsystem that
Interact with the PhotonVision UI and basic code structures
This allows you to access PhotonVision via the roborio USB port.
This can be useful when debugging at competitions
https://docs.photonvision.org/en/latest/docs/quick-start/networking.html
tags:
- stub
Understand how to efficiently communicate to and from a robot for diagnostics and control
tags:
- stub
Followup to:
Auto Differential
Swerve Motion
Do you need path planning to make great autos? Maybe! But not always.
PathPlanning can give you extremely fast, optimized autos, allowing you to squeeze every fraction of a second from your auto. However, it can be challenging to set up, and has a long list of requirements to get even moderate performance.
Unlike "path planning" algorithms that attempt to define and predict robot motion, Pure Pursuit simply acts as a reactive path follower, as the name somewhat implies.
This algorithm is fairly simple and conceptually straightforward, but comes with some notable limitations. However, the concept is very useful for advancing simpler autos.
tags:
- stub
Part of:
SuperStructure Flywheel
FeedForwards
Understand the typical Git operations most helpful for day-to-day programming
This module is intended to be completed alongside other tasks.
For each task, start from the main branch and create a new branch to represent that card.
Git is a "source control" tool intended to help you manage source code and other text data.
Git has many superpowers, but the basic level provides "version control"; This allows you to create "commits", which allow you to capture your code's state at a point in time. Once you have these commits, git lets you go back in time, compare to what you've done, and more.
Fundamental to Git is the concept of a "difference", or a diff for short. Rather than just duplicating your entire project each time you want to make a commit snapshot, Git actually just keeps track of what you've changed.
In a simplified view, updating this simple subsystem
/** Example class that does a thing */
class ExampleSubsystem extends SubsystemBase{
private SparkMax motor = new SparkMax(1);
ExampleSubsystem(){}
public void runMotor(){
motor.set(1);
}
public void stop(){/*bat country*/}
public void go(){/*fish*/}
}
to this
/** Example class that does a thing */
class ExampleSubsystem extends SubsystemBase{
private SparkMax motor = new SparkMax(1);
private Encoder encoder = new Encoder(0, 1);
ExampleSubsystem(){}
public void runMotor(double power){
motor.set(power);
}
public void stop(){/*bat country*/}
public void go(){/*fish*/}
}
would be stored in Git as
class ExampleSubsystem extends SubsystemBase{
private SparkMax motor = new SparkMax(1);
+ private Encoder encoder = new Encoder(0, 1);
ExampleSubsystem(){}
- public void runMotor(){
-  motor.set(1);
+ public void runMotor(double power){
+  motor.set(power);
}
public void stop(){/*bat country*/}
With this difference, the changes we made are a bit more obvious. We can see precisely what we changed, and where we changed it.
We also see that some stuff is missing in our diff: the first comment is gone, and we don't see go or our closing brace. Those didn't change, so we don't need them in the commit.
However, there are some unchanged lines, near the changed lines. Git refers to these as "context". These help Git figure out what to do in some complex operations later. It's also helpful for us humans just taking a casual peek at things. As the name implies, it helps you figure out the context of that change.
We also see something interesting: when we "change" a line, Git actually records it as removing the old line and adding the new one.
Now that we have some changes in place, we want to "Commit" that change to Git, adding it to our project's history.
A commit in git is just a bunch of changes, along with some extra data. The most relevant is the commit message, which describes what changed and why.
These commits form a sequence, building on top of one another from the earliest state of the project. We generally assign a name to these sequences, called "branches".
A typical project starts on the "main" branch; after a few commits, you'll end up with a nice, simple history like this.
It's worth noting that a branch really is just a name that points to a commit, and is mostly a helpful book-keeping feature. The commits and commit chain do all the heavy lifting. Basically anything you can do with a branch can be done with a commit's hash instead!
We're now starting to get into Git's superpowers. You're not limited to just one branch. You can create new branches, switch to them, and then commit, to create commit chains that look like this:
Here we can see that mess for qual 4 and mess for qual 8 are built off the main branch, but kept as part of the competition branch. This means our main branch is untouched. We can now switch back and forth using git switch main and git switch competition to access the different states of our codebase.
We can, in fact, even continue working on main, adding commits like normal.
Being able to have multiple branches like this is a foundational part of how Git works, and a key detail of its collaborative model.
However, you might notice the problem: we can currently access the changes in competition or main, but not both at once.
Merging is what allows us to do that. It's helpful to think of it as merging the changes from another branch into your current branch.
If we merge competition into main, we get this. Both changes ready to go! Now main can access the competition branch's changes.
However, we can equally merge main into competition, granting competition access to the changes in main.
With merging as a tool, we have unlocked the true power of git. Any set of changes is built on top of the others, and we can grab changes without interrupting our existing code or any other changes we've been making!
This feature powers git's collaborative nature: You can pull in changes made by other people just as easily as you can your own. They just have to have the same parent somewhere up the chain so git can figure out how to step through the sequence of changes.
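To see the whole branch-and-merge flow end to end, here's a hedged command-line sketch using a scratch repository (the branch and commit names mirror the example above):

```shell
# build a scratch repo demonstrating branching and merging
mkdir -p /tmp/git-merge-demo && cd /tmp/git-merge-demo && rm -rf .git robot.txt
git init -q -b main
git config user.email demo@example.com && git config user.name demo
echo "base robot code" > robot.txt
git add robot.txt && git commit -qm "initial commit"

git switch -qc competition            # create and switch to the competition branch
echo "mess for qual 4" >> robot.txt
git commit -qam "mess for qual 4"
echo "mess for qual 8" >> robot.txt
git commit -qam "mess for qual 8"

git switch -q main                    # main is untouched here
git merge -q --no-ff competition -m "merge competition into main"
grep "mess for qual 8" robot.txt      # main now has the competition changes
```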
Git is a distributed system, and as such has a few different places that all these changes can live.
The most apparent one is your actual code on your laptop, forming the workspace. As far as you're concerned, this is just the files in the directory. However, Git sees them as the culmination of all changes committed in the current branch, plus any uncommitted changes.
The next one is "staging": this is just the incomplete next commit, and holds all the changes you've added as part of it. Once you properly commit these changes, your staging will be cleared, and you'll have a new commit in your tree.
It basically looks like this:
Next is a "remote", representing a computer somewhere else. In most Git work, this is just Github. There's several commands focused on interacting with your remote, and this just facilitates collaborative work and offsite backup.
git init: This creates a new git repository for your current project. You want to run this in the base directory of your project.
git add <file>: This stages changes to the given files, adding them to the next commit.
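A hedged sketch of those first commands in sequence, in a scratch directory (the file names are arbitrary):

```shell
# create a repository and make a first commit
mkdir -p /tmp/git-basics-demo && cd /tmp/git-basics-demo && rm -rf .git Robot.java
git init -q
git config user.email demo@example.com && git config user.name demo
echo "public class Robot {}" > Robot.java
git add Robot.java                   # stage the new file
git commit -qm "Add Robot class"     # commit the staged change
git log --oneline                    # shows our single commit
```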
There's a lot of tools that interact with your Git repository, but it's worth being mindful about which ones you pick! Many tools do unexpected operations behind the scenes.
Singletons are a coding structure (or "pattern") that represents a unique entity. It's designed to allow one, and only one instance of a class.
This tends to be useful for controlling access to unique items like physical hardware, IO channels, and other such items.
The techniques used in this pattern are also helpful for cases where you might be fine with multiple instances, but you need to restrict the total number, or keep track in some way.
public class ExampleSingleton{
private static ExampleSingleton instance;
//note private constructor
private ExampleSingleton(){}
public static ExampleSingleton getInstance(){
//Check to see if we have an instance; If not, create it.
if(instance==null) instance = new ExampleSingleton();
//If so, return it.
return instance;
}
// Methods just work normally.
public double exampleMethod(){
return 0;
}
}
There's a few key details here:
private ExampleSingleton(){}
The constructor is private, meaning you cannot create objects using new ExampleSingleton(). If you could, you would be able to create a second instance of the class! So, this is private, meaning only the class itself can create an instance.
public static ExampleSingleton getInstance()
This does the heavy lifting: it checks whether we have an instance, and if not, creates one. If we do, it just returns a reference to it. This is how we ensure we only ever create one instance of the class. It's static, which allows us to call it on the class itself (since we won't have an instance until we do).
private static ExampleSingleton instance;
This is the reference for the created instance. Notice that it's static, meaning the instance is "owned" by the class itself.
Here's a more practical example:
public class ExampleSensorSystem{
private static ExampleSensorSystem instance;
//Example object representing a physical object, belonging to
//an instance of this class.
//If we create more than one, our code will crash!
//Fortunately, singletons prevent this.
private Ultrasonic sensor = new Ultrasonic(0,1);
private ExampleSensorSystem(){} //note private constructor
public static ExampleSensorSystem getInstance(){
//Check to see if we have an instance; If not, create it.
if(instance==null) instance = new ExampleSensorSystem();
//If so, return it.
return instance;
}
public double getDistance(){
return sensor.getRangeInches();
}
}
Elsewhere, these are all valid ways to interface with this sensor, and get the data we need
ExampleSensorSystem.getInstance().getDistance();
var sensor = ExampleSensorSystem.getInstance();
// do other things
sensor.getDistance();
Rarely is often the right answer. While Singletons are useful for streamlining code in some circumstances, they can also obscure where and how a class is being used. Here are the general considerations:
In cases where it's less obvious, the "dependency injection" pattern makes more sense. You'll see this pattern used in a lot of FRC code for subsystems. Even though subsystems are unique, they're highly mutable, and we want to track access due to command requirements and lockouts.
Similarly, for sensors we probably want multiple of the same type. This means if we used a Singleton, we would have to re-write the code several times (or get creative with class abstractions)!
This pattern consists of passing a reference to items in a direct, explicit way, like so.
//We create our items, passing the subsystem into the command
ExampleSubsystem exampleSubsystem = new ExampleSubsystem();
ExampleCommand exampleCommand = new ExampleCommand(exampleSubsystem);
class ExampleCommand{
ExampleSubsystem example;
ExampleCommand(ExampleSubsystem example){
this.example = example;
}
public void exampleMethod(){
//has access to example subsystem
}
}
A pre-computed list of input and output values.
Can be used to help model non-trivial conditions where mathematical models are complicated, or don't apply effectively to the problem at hand.
Commonly used for modelling Superstructure Shooter
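As a sketch of the idea, here's a minimal lookup table with linear interpolation between entries. The distances and RPMs are made-up examples, not tuned values:

```java
import java.util.TreeMap;

// Maps distance-to-target (meters) to a flywheel RPM, interpolating
// linearly between the pre-computed entries.
public class ShotTable {
    private static final TreeMap<Double, Double> table = new TreeMap<>();
    static {
        table.put(1.0, 2000.0); // distance (m) -> flywheel RPM (example values)
        table.put(2.0, 2600.0);
        table.put(3.0, 3400.0);
    }

    public static double lookup(double distance) {
        var low = table.floorEntry(distance);
        var high = table.ceilingEntry(distance);
        if (low == null) return high.getValue();   // below table range: clamp
        if (high == null) return low.getValue();   // above table range: clamp
        if (low.getKey().equals(high.getKey())) return low.getValue(); // exact hit
        // linear interpolation between the two neighboring entries
        double t = (distance - low.getKey()) / (high.getKey() - low.getKey());
        return low.getValue() + t * (high.getValue() - low.getValue());
    }

    public static void main(String[] args) {
        System.out.println(lookup(1.5)); // halfway between 2000 and 2600: 2300.0
    }
}
```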
aliases:
- Threads
- Futures
A Future is a simplified, much more user-friendly application of threading.
A "thread" normally refers to a single chain of code being executed. Most code is "single threaded", meaning everything happens in order; For something to be done, it has to wait its turn.
With proper code setup, you can make it appear that code is doing multiple things at once. There's a few terms for this, but usually "concurrency" or "time sharing" come up here. However, you're still fundamentally waiting for other code to finish, and a slow part of code holds up everything. This might be a complex computation, or a slow IO transfer across a network or data bus.
Tasks like these don't take computational time, but do take real world time in which we could be doing other things.
Threads, on the other hand, can utilize additional processor cores to run code completely isolated and independently. Which is where the trouble starts.
Threads come with a bit of inherent risk: because things are happening asynchronously (as in, not in sync with each other), you can develop issues if things are not done when you expect them to be.
//Set up two variables
var x;
var y;
//These two tasks are slow, so make a thread for it!
Thread.spawn(()-> x=/*long computation for X*/)
Thread.spawn(()-> y=/*long computation for y*/)
//Sum things up!
var z = x+y
This will not work reliably; it's unlikely that both threads will have finished by the time the main thread tries to use their values. This example is obvious, but in practice, this can be very sneaky and difficult to pin down.
In 2024, we had code managing Limelight data, which would read tv (the "target valid" flag, meaning everything else is valid), then tx and ty, along with getBotPose. What happened was simply that in some cases, after checking tv to assert valid data, the data changed, causing our calculations to break. The remote system (effectively a different thread) changed the data underneath us.
In some cases, we'd get values that should be valid, but instead they resulted in crashes.
There's lots of strategies to manage threads, most with notable downsides.
There's other strategies as well, but this brings us to...
A Future combines several of those into one, very user friendly package. Conceptually, it represents a "future value" that has not yet been calculated, while actually containing the code to get that value.
Because it's oriented with this expectation, they're easy to think about and use. They're almost as straightforward as any other variable.
//create a future and pass it some work.
CompletableFuture<Double> future = CompletableFuture.supplyAsync( ()-> {Timer.delay(5); /*some long running calculation*/ return 4.0;} );
System.out.println("waiting....");
System.out.println( future.get() );
That's it. For the simplicity involved, it doesn't feel like you're using threads.... but you are. Notice that "waiting" prints out instantly; about 5 seconds before the number, in fact.
Futures handle most of the "busywork" for you: managing thread operation, checking to see if it's done, and retrieving the return value. The thread runs in the background, but if it's not done by the time you reach future.get(), it'll automatically block the main thread, wait until the future's thread is done, get the value, and then resume. If the future is already done, you just race on ahead. The following demonstrates this clearly.
//create a future and pass it some work.
CompletableFuture<Double> future = CompletableFuture.supplyAsync( ()-> {Timer.delay(5); /*some long running calculation*/ return 4.0;} );
System.out.println("waiting....");
Timer.delay(6); // do some busywork on the main thread too
System.out.println("Done with main thread!");
System.out.println( future.get() ); //will print instantly; The thread finished during main thread's work!
Threads would be really nice in a few places, but in particular, building autos. Autos take a very long time to build, and you have a lot of them. And you don't want them wasting time if you're not actually running an auto.
But remember that Futures represent a "future value", and "contain the code to build it". A Command is a future value, and has a process to build it.... so it's a perfect fit. But you also have to select one of several autos. This is easily done:
CompletableFuture<Command> selectedAutoFuture = CompletableFuture.supplyAsync(this::doNothing);
SendableChooser<Supplier<Command>> autoChooser = new SendableChooser<>();
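As a JDK-only sketch of that pattern (buildAuto and threePiece are invented stand-ins for your auto-building code, not WPILib API): a Supplier picks which slow build to run, and a CompletableFuture runs it off the main thread.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

public class AutoSelectDemo {
    // Stand-in for an expensive auto-building step.
    static String buildAuto(String name) {
        try { Thread.sleep(200); } catch (InterruptedException e) { }
        return "auto:" + name;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for autoChooser.getSelected(): a Supplier deferring the build.
        Supplier<String> selected = () -> buildAuto("threePiece");
        // Kick the build off in the background immediately...
        CompletableFuture<String> future = CompletableFuture.supplyAsync(selected::get);
        // ...the main thread stays free, and we only block when the result is needed.
        System.out.println(future.get());
    }
}
```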
A full example is in /Programmer Guidance/auto-selection, but the gist is that Supplier<Command> is a function that returns a command.
Conveniently, you don't need to return values. You can, if needed, run the void version, using a Runnable or non-returning lambda.
CompletableFuture<Void> voidedFuture = CompletableFuture.runAsync(()->{});
if(voidedFuture.isDone()) /* do a thing */ ;
While not exactly the intended use case, this allows you to easily run and monitor background code without worry.
Be aware that, as with all threads, you generally should not
Additionally, Futures are most effective when your code starts a computation, and then reacts to the completion of that computation afterward. They're intended for run-once use cases.
For long-running background threads, you'd want to use something else better suited to it.
Pseudo-threads are "thread-like" code structures that look and feel like threads, but aren't really.
WPILib offers a convenient way to run pseudo-threads through the use of addPeriodic(). This registers a Runnable at a designated loop interval, but it's still within the thread safety of normal robot code.
For many cases, this can provide certain time-sensitive features, while mitigating the hazards of real threads.
Native Java Threads are a suitable way to continuously run background tasks that need to truly operate independent of the main thread. However, any time they interface with normal threads, you expose the hazard of data races or data corruption; Effectively, data changes underneath you, causing weird numerical glitches, or outright crashes.
In these cases, you need to meticulously manage access to the threaded data. Java has numerous built in helpers, but there's no shortcut for responsible coding.
The easiest way is use of the synchronized keyword in Java; this is a method modifier (like public or static) which declares that only one thread at a time may run the object's synchronized methods.
private double number=0;
public synchronized void increment(){
number+=1;
}
public synchronized void double_increment(){
number+=2;
}
// do some threads and run our code
public periodicThreadA(){ increment(); }
public periodicThreadB(){ double_increment(); }
This is it; if both A and B try to run their methods simultaneously, one thread will block until the other's synchronized call completes. Because of how we structure FRC code, this is often a perfectly suitable strategy; any function trying to make a synchronized call has to wait until the other synchronized functions are done.
However, this comes with potential performance issues: the lock is actually protecting the base object (this), rather than the more narrow value of number. So all synchronized methods share one mutex, meaning if you have multiple, independently updating values, they block each other needlessly.
We can get finer-grain control by use of structures like this:
private double number=0;
private Object numberLock = new Object();
public void increment(){
synchronized (numberLock){
number+=1;
}
}
public void double_increment(){
synchronized (numberLock){
number+=2;
}
}
// do some threads and run our code
public periodicThreadA(){ increment(); }
public periodicThreadB(){ double_increment(); }
This structure behaves similarly, but now we've explicitly stated the mutex: the lock is tied to the data we care about (number) rather than the whole object, so unrelated values can get their own locks and stop blocking each other.
Note that in both cases, any access to number needs to go through a synchronized block.
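Here's a self-contained sketch showing why this matters: two threads hammer a synchronized counter, and the lock guarantees no increments are lost. Without synchronized, the final count would usually come up short.

```java
public class SyncDemo {
    private int number = 0;
    public synchronized void increment() { number += 1; }

    // Spin up two threads that each increment 100,000 times.
    public static int runDemo() {
        SyncDemo demo = new SyncDemo();
        Runnable work = () -> { for (int i = 0; i < 100_000; i++) demo.increment(); };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start(); b.start();
        try { a.join(); b.join(); } catch (InterruptedException e) { }
        return demo.number;
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // always 200000 thanks to the lock
    }
}
```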
Helpfully, you can clean this up for many common cases, as shown in the following example: any Object type (any class or data structure; effectively everything but primitives like int, float, and boolean) can be locked on directly, avoiding a separate mutex. However, we may want to develop a notation to demarcate thread-accessed objects like this.
private final List<Pose2d> poseHistory = new ArrayList<>();
public void addPose(Pose2d pose){
synchronized (poseHistory){ //the object can hold its own mutex
//note: never reassign a field you lock on; other threads would still hold the old object's lock
poseHistory.add(pose);
}
}
Message passing is another threading technique that allows threads to interact safely. You simply take your data, and toss it to another thread, where it can pick it up as it needs to.
SynchronousQueue is a useful and simple case: this is a queue optimized for handoffs between threads. Instead of suppliers adding values indirectly, this queue allows functions to directly block until the other thread arrives with the data it wants. This is useful when one side is significantly faster than the other, making the time spent waiting non-critical. There are methods for both fast suppliers with slow consumers, and fast consumers with slow suppliers.
SynchronousQueue<Integer> queue = new SynchronousQueue<Integer>();
public void fastSupplier(){ //ran at high speeds
int value = 0; /*some value, such as quickly running sensor read*/
queue.offer(value); //will not block; Will simply see there's no one listening, and give up
}
public void slowConsumer(){ //ran at low speeds
int value = queue.take(); //will block this thread, waiting until fastSupplier tries to make another offer.
//do something with the value
}
In most cases though, you want to keep track of all reported data, but the rate at which it's supplied doesn't always match the rate at which it's consumed. A good example is vision data for odometry. It might be coming in at 120FPS, or 0FPS. Even if it's coming in at the robot's 50hz, it's probably not exactly timed with the function.
Depending on the requirements, an ArrayBlockingQueue (First In, First Out) or a LinkedBlockingDeque (usable Last In, First Out) may fit. These have different uses, depending on the desired order.
ArrayBlockingQueue<Pose2d> queue = new ArrayBlockingQueue<Pose2d>(10); //must be given a fixed capacity
public void VisionSupplier(){
Optional<Pose2d> value = vision.getPoseFromAprilTags();
if(value.isPresent()){
if(queue.remainingCapacity() < 1) queue.poll(); //delete the oldest item if we don't have space
queue.offer(value.get()); //add the newest value.
}
}
public void VisionConsumer(){ //ran at low speeds
var value = queue.take(); //grab the oldest value from the queue or block to wait for it
odometry.update(value);
}
Message passing helps you manage big bursts of data and lets threads block/wait for new data, but it does introduce one problem: you have to make sure your code behaves well when your queue is full or empty.
In this case, it's sensible to just throw away the oldest value in our queue; We'll replace it with a more up-to-date one anyway.
We also block when trying to retrieve new data. This is fine for a dedicated thread, but when run on our main thread this would cause our bot to halt if we drive away from a vision target. In that case, we'd want to check whether there's a value first, or use poll(), which returns null instead of waiting. The Java docs can help you find the desired behavior for various operations.
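A quick runnable illustration of the non-blocking variants on a small ArrayBlockingQueue:

```java
import java.util.concurrent.ArrayBlockingQueue;

public class PollDemo {
    public static void main(String[] args) {
        ArrayBlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2);
        System.out.println(queue.poll());    // null: empty queue, poll() doesn't block
        queue.offer(1);
        queue.offer(2);
        System.out.println(queue.offer(3));  // false: queue full, offer() gives up
        System.out.println(queue.poll());    // 1: oldest value comes out first (FIFO)
    }
}
```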
Also be wary about queue sizes: a LinkedBlockingDeque defaults to an effectively unbounded capacity, meaning if your supplier is faster than your consumer, you'll steadily run out of memory; an ArrayBlockingQueue always requires a fixed capacity up front. Setting a maximum (reasonable) size is the best course of action.