> I present: "Parts of @GIMP_Official's binary, represented as a bitmap" pic.twitter.com/iLljdE4nlK
>
> — Evan Pratten (@ewpratten) September 11, 2019

## Program design
Like most of my ideas, this one ended with me writing some code to test it out. Above is a small sample of the interesting designs found in the [gimp](https://www.gimp.org/) binary. The goals for this script were to:

 - Accept any file of any type or size
 - Allow the user to select the file dimensions
 - Generate an image
 - Write the data in a common image format

If you would like to see how the code works, skip down to the *Check out the script* section below.

## A note on data wrapping
By using a [generator](https://wiki.python.org/moin/Generators) and the third (step) argument of the [range function](https://docs.python.org/3/library/functions.html#func-range), any list can be easily split into a 2D list at a set interval.

```python
# Assuming l is a list of data, and n is an int that denotes the desired row length
def chunk(l, n):
    for i in range(0, len(l), n):
        yield l[i:i + n]

# e.g. list(chunk([1, 2, 3, 4, 5, 6], 2)) == [[1, 2], [3, 4], [5, 6]]
```

### Binaries have a habit of not being rectangular
Unlike photos, binaries are not generated from rectangular image sensors, but instead by compilers and assemblers (and sometimes hand-written binary). These do not produce data that fits a perfect rectangle. Due to this, my script simply removes the last (partial) line from the image to "reshape" it. I may end up adding a small piece of code to pad the final line instead of stripping it in the future.

## Other file types
I also looked at other file types. Binaries are very interesting because they follow very strict ordering rules. I was hoping that a `wav` file would do something similar, but that does not appear to be the case. This is the most interesting pattern I could find in a `wav` file:

> Following up my previous post with a tiny segment of an audio file. This one is little less interesting pic.twitter.com/u9EFloxnK5
>
> — Evan Pratten (@ewpratten) September 11, 2019
Back to executable data: these are some small segments of a `dll` file.

## Check out the script
This script is hosted [on my GitHub account](https://github.com/Ewpratten/binmap) as a standalone file. Any version of Python 3 should work, but the following libraries are needed:

 - Pillow
 - Numpy

diff --git a/src/collections/_posts/2019-10-05-billwurtz.md b/src/collections/_posts/2019-10-05-billwurtz.md
new file mode 100644
index 0000000..e738ec9

---
layout: default
title: Using an RNN to generate Bill Wurtz notes
description: Textgenrnn is fun
date: 2019-10-05
tags:
- project
- walkthrough
- python
redirect_from:
- /post/99g9j2r90/
- /99g9j2r90/
aliases:
- /blog/2019/10/05/billwurtz
- /blog/billwurtz
---

[Bill Wurtz](https://billwurtz.com/) is an American musician who became [reasonably famous](https://socialblade.com/youtube/user/billwurtz/realtime) through short musical videos posted to Vine and YouTube. I was searching through his website the other day, stumbled upon a page labeled [*notebook*](https://billwurtz.com/notebook.html), and thought I should check it out.

Bill's notebook is a large (about 580 posts) collection of random thoughts, ideas, and sometimes just collections of words. A prime source of entertainment, and of neural network inputs.

> *"If you are looking to burn something, fire may be just the ticket"* - Bill Wurtz

## Choosing the right tool for the job
If you haven't noticed yet, I'm building a neural net to generate notes based on his writing style and content. Anyone who has read [my first post](@/blog/2018-06-27-BecomeRanter.md) will know that I have already done a similar project in the past. This means *time to reuse some code*!

For this project, I decided to use an amazing library by @minimaxir called [textgenrnn](https://github.com/minimaxir/textgenrnn). This Python library will handle all of the heavy (and light) work of training an RNN on a text dataset, then generating new text.

## Building a dataset
This project was a joke, so I didn't bother with properly grabbing each post, categorizing them, and parsing them. Instead, I built a little script to pull every HTML file from Bill's website, and regex out the body. This ended up leaving some artifacts in the output, but I don't really mind.

```python
import re
import requests


def loadAllUrls():
    page = requests.get("https://billwurtz.com/notebook.html").text

    # Every notebook entry is linked via an uppercase HREF attribute
    links = re.findall(r"HREF=\"(.*)\"style", page)

    return links


def dumpEach(urls):
    for url in urls:
        # The original script chained .replace() calls to strip specific HTML
        # tags here (the exact strings have been lost from this copy of the
        # post); stripping all tags has the same effect
        page = requests.get(f"https://billwurtz.com/{url}").text.strip()
        page = re.sub(r"<[^>]*>", "", page)

        # Append the cleaned body to one big training file
        with open("notes.txt", "a") as f:
            f.write(page + "\n")


if __name__ == "__main__":
    dumpEach(loadAllUrls())
```
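With the dataset dumped to a text file, training and sampling takes only a few lines. Here is a minimal sketch using textgenrnn (the file name and hyperparameters are illustrative, not the exact values I used):

```python
from textgenrnn import textgenrnn

# Train a fresh model on the scraped notes
textgen = textgenrnn()
textgen.train_from_file("notes.txt", num_epochs=10)

# Generate five new "Bill Wurtz" notes, with a bit of randomness
textgen.generate(5, temperature=0.8)
```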
> The theme for Ludum Dare 46 is... Keep it alive https://t.co/APmeEhwjEp #LDJAM pic.twitter.com/bzNYi2zlDG
>
> — Ludum Dare (@ludumdare) April 18, 2020

..and so we started.

Day 0 was spent on three tasks:
 - Deciding the story for our game
 - Allocating tasks
 - Building a software framework for the game

We decided to program our game in JavaScript (but not without an argument about types) because that is @rsninja722's primary language, and we can use his JS game engine, [game.js](https://github.com/rsninja722/game.js). On top of that, we also decided to use [SASS](https://sass-lang.com/) for styling, and I designed [a CSS injector](https://github.com/rsninja722/LudumDare46/blob/master/docs/assets/js/injection/cssinjector.js) that allows us to share variables between JS and SASS.

After task allocation, I took on the job of handling sounds and sound loading for the game. I decided to start work on that during Day 1, due to homework.

*The game's progress at the end of Day 0 can be found at commit [0b4a1cd](https://github.com/rsninja722/LudumDare46/tree/0b4a1cdb92e62ff0f9453f6f169f641dd82e8f09)*


## Day 1

----

Day 1 started with @exvacuum developing a heart-rate monitor system for the game:

*Demo image showing off his algorithm*

His progress was documented [on his YouTube channel](https://www.youtube.com/watch?v=oqcbO8x0evY).

I also started out by writing a sound system that uses audio channels to separate sounds. This system pre-caches all sounds while the game loads. Unfortunately, after getting my branch merged into master, I noticed a few bugs:
 - When queueing audio, the 2 most recent requests are always ignored
 - Some browsers do not allow me to play multiple audio streams at the same time

Due to these issues, I decided to rewrite the audio backend to use [Howler.js](https://howlerjs.com/). I streamed this rewrite [on Twitch](https://www.twitch.tv/videos/595864066). The Howler rewrite was very painless, and made for a much nicer interface for playing audio assets.

```javascript
// The old way
globalSoundContext.playSound(globalSoundContext.channels.bgm, soundAssets.debug_ding);

// The new way
soundAssets.debug_ding.play();
```

This rewrite also added integration with the volume control sliders in the game settings menu:

*Audio Settings screen*

Later on in the day, a basic HUD was designed to incorporate the game elements. A bug was also discovered that caused Firefox-based clients to not render the background fill. We decided to replace the background fill with an image later.

*V1 of the game HUD*

While developing the sound backend and tweaking the UI, I added sound assets for heartbeats and footsteps. World assets were also added, and the walking system was improved.

*The game with basic world assets loaded*

@wm-c and @rsninja722 also spent time developing the game's tutorial mode.

*The game's progress at the end of Day 1 can be found at commit [84d8438](https://github.com/rsninja722/LudumDare46/tree/84d843880f052fd274d2d14036220e6b591e9ec3)*

## Day 2 & 3

----

Day 2 started with a new background asset and a new HUD design:

*The game's new background*

*The game's new HUD*

@rsninja722 also got to work on updating the game's collisions based on the new assets, while I added more sounds to the game (again, streaming this process [on Twitch](https://www.twitch.tv/videos/596589171)).
From then on, development time was spent tweaking things such as:
 - A Chrome sound bug
 - A transition bug when moving from the loading screen to the tutorial
 - Some collision bugs
 - Adding a new credits screen

*The game's progress at the end of Day 2 can be found at commit [b9d758f](https://github.com/rsninja722/LudumDare46/tree/b9d758f4172f2ca251da6f60af713888ef28b5fe)*

## The Game

Micromanaged Mike is free to play on [@rsninja722's website](https://rsninja.dev/LudumDare46/).

*Final game screenshot*

diff --git a/src/collections/_posts/2020-05-19-running-roborio-native.md b/src/collections/_posts/2020-05-19-running-roborio-native.md
new file mode 100644
index 0000000..ff7b5a5

---
layout: default
title: Running RoboRIO firmware inside Docker
description: Containerized native ARMv7l emulation in 20 minutes
date: 2020-05-19
tags: frc roborio emulation
redirect_from:
- /post/5d3nd9s4/
- /5d3nd9s4/
aliases:
- /blog/2020/05/19/running-roborio-native
- /blog/running-roborio-native
extra:
  excerpt: This post covers how to run a RoboRIO's operating system in Docker
---

It has now been 11 weeks since I last had access to a [RoboRIO](https://www.ni.com/en-ca/support/model.roborio.html) to use for debugging code, and there are limits to my simulation software. So, I really only have one choice: *emulate my entire robot*.

My goal is to eventually have every bit of hardware on [5024](https://www.thebluealliance.com/team/5024)'s [Darth Raider](https://cs.5024.ca/webdocs/docs/robots/darthRaider) emulated and running on my Docker swarm. Conveniently, everything uses (mostly) the same CPU architecture. In this post, I will go over how to build a RoboRIO docker container.

## Host system requirements

This process requires a host computer with:
 - An x86_64 CPU
 - A decent amount of RAM
 - [Ubuntu 18.04](https://mirrors.lug.mtu.edu/ubuntu-releases/18.04/) or later
 - [Docker CE](https://docs.docker.com/engine/install/debian/) installed
 - [docker-compose](https://docs.docker.com/compose/install/) installed

## Getting a system image

This is the hardest step. To get a RoboRIO docker container running, you will need:
 - A copy of the latest RoboRIO firmware package
 - A copy of `libfakearmv7l.so` ([download](https://github.com/robotpy/fakearmv7l/releases/download/v1/libfakearmv7l.so))

### RoboRIO Firmware

To acquire a copy of the latest RoboRIO firmware package, you will need to install the [FRC Game Tools](https://www.ni.com/en-ca/support/downloads/drivers/download.frc-game-tools.html) on a **Windows** machine (not Wine).

After installing the toolsuite and activating it with your FRC team's activation key (provided in the Kit of Parts), you can grab the latest `FRC_roboRIO_XXXX_vXX.zip` file from the installation directory of the *FRC Imaging Tool* (this will vary depending on how, and where, the Game Tools are installed).

After unzipping this file, you will find another ZIP file and a LabVIEW FPGA file. Unzip the ZIP, and look for a file called `systemimage.tar.gz`. This is the RoboRIO system image. Copy it to your Ubuntu computer.

## Bootstrapping

The bootstrap process is made up of a few parts:

 1. Enabling support for ARM-based docker containers
 2. Converting the RoboRIO system image to a Docker base image
 3. Building a Dockerfile with hacked auth
### Enabling Docker-ARM support

Since the RoboRIO system image and libraries are compiled to run on ARMv7l hardware, they will refuse to run on an x86_64 system. This is where [QEMU](https://www.qemu.org/) comes into play. We can use QEMU as an emulation layer between our docker containers and our CPU. To get QEMU set up, we must first install support for emulating ARM binaries on x86 by running:

```sh
sudo apt install qemu binfmt-support qemu-user-static -y
```

Once QEMU has been installed, we must run the registration scripts with:

```sh
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
```

### Converting the system image to a Docker base

We have a system image filesystem, but need Docker to view it as a Docker image.

#### Using my pre-built image

Feel free to skip the following step, and just use my [pre-built](https://hub.docker.com/r/ewpratten/roborio) RoboRIO base image. It is already set up with hacked auth, and is (at the time of writing) based on firmware version `2020_v10`.

To use it, replace `roborio:latest` with `ewpratten/roborio:2020_v10` in the `docker-compose.yml` config below.

#### Building your own image

Make a folder to act as your "working directory". Put the system image in it, then make a subfolder named `system` and put `libfakearmv7l.so` inside it (the `Dockerfile` we write later copies its extra files from that subfolder). Now, import the system image into docker with:

```sh
docker import ./systemimage.tar.gz roborio:tmp
```

This will build a docker base image out of the system image, and name it `roborio:tmp`. You can use this on its own, but if you want to deploy code to the container with [GradleRIO](https://github.com/wpilibsuite/GradleRIO), or SSH into the container, you will need to strip the NI Auth.

### Stripping National Instruments Auth

By default, the RoboRIO system image comes fairly locked down. To fix this, we can "extend" our imported docker image with some configuration that removes some unknown passwords.

In the `system` subfolder, we must first create a file called `common_auth`. This will store our modified authentication configuration. Add the following to the file:

```
#
# /etc/pam.d/common-auth - authentication settings common to all services
#
# This file is included from other service-specific PAM config files,
# and should contain a list of the authentication modules that define
# the central authentication scheme for use on the system
# (e.g., /etc/shadow, LDAP, Kerberos, etc.). The default is to use the
# traditional Unix authentication mechanisms.

# ~~~ This file is modified for use with Docker ~~~

# here are the per-package modules (the "Primary" block)
# auth [success=2 auth_err=1 default=ignore] pam_niauth.so nullok
# auth [success=1 default=ignore] pam_unix.so nullok
# here's the fallback if no module succeeds
# auth requisite pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
auth required pam_permit.so
# and here are more per-package modules (the "Additional" block)

```

Now, we must create a `Dockerfile` in the working directory with the following contents:

```
FROM roborio:tmp

# Fixes issues with the original RoboRIO image
RUN mkdir -p /var/volatile/tmp && \
    mkdir -p /var/volatile/cache && \
    mkdir -p /var/volatile/log && \
    mkdir -p /var/run/sshd

RUN opkg update && \
    opkg install binutils-symlinks gcc-symlinks g++-symlinks libgcc-s-dev make libstdc++-dev

# Overwrite auth
COPY system/common_auth /etc/pam.d/common-auth
RUN useradd admin -ou 0 -g 0 -s /bin/bash -m
RUN usermod -aG sudo admin

# Fixes for WPILib
RUN mkdir -p /usr/local/frc/third-party/lib
RUN chmod 777 /usr/local/frc/third-party/lib

# This forces uname to report armv7l
COPY system/libfakearmv7l.so /usr/local/lib/libfakearmv7l.so
RUN chmod +x /usr/local/lib/libfakearmv7l.so && \
    mkdir -p /home/admin/.ssh && \
    echo "LD_PRELOAD=/usr/local/lib/libfakearmv7l.so" >> /home/admin/.ssh/environment && \
    echo "PermitUserEnvironment yes" >> /etc/ssh/sshd_config && \
    echo "PasswordAuthentication no" >> /etc/ssh/sshd_config

# Put the CPU into 32bit mode, and start an SSH server
# (setarch runs the given command directly, so sshd is passed as its argument)
ENTRYPOINT ["setarch", "linux32", "/usr/sbin/sshd", "-D"]
```

This file will cause the container to:
 - Install needed tools
 - Configure an "admin" user with full permissions
 - Set r/w permissions for all FRC libraries
 - Overwrite the system architecture with a custom string to allow programs like `pip` to run properly
 - Enable password-less SSH login
 - Set the CPU to 32-bit mode

We can now build the final image with these commands:

```sh
docker build -f ./Dockerfile -t roborio:local .
docker rmi roborio:tmp
docker tag roborio:local roborio:latest
```

## Running the RoboRIO container locally

We can now use `docker-compose` to start a fake robot network locally, and run our RoboRIO container. First, we need to make a `docker-compose.yml` file. In this file, add:

```yml
version: "3"

services:

  roborio:
    image: roborio:latest # Change this to "ewpratten/roborio:2020_v10" if using my pre-built image
    networks:
      robo_net:
        ipv4_address: 10.50.24.2

networks:
  robo_net:
    ipam:
      driver: default
      config:
        - subnet: 10.50.24.0/24
```

We can now start the RoboRIO container by running:

```sh
docker-compose up
```

You should now be able to SSH into the RoboRIO container with:

```sh
ssh admin@10.50.24.2
```

Or even deploy code to the container! (Just make sure to set your FRC team number to `5024`.)
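As a quick sanity check (assuming the container is up on the compose network above), the `libfakearmv7l.so` preload should make the container report an ARM CPU over SSH:

```sh
# uname normally reports the host CPU; the LD_PRELOAD hack fakes it
ssh admin@10.50.24.2 uname -m
# -> armv7l
```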
diff --git a/src/collections/_posts/2020-08-03-joystick-to-voltage.md b/src/collections/_posts/2020-08-03-joystick-to-voltage.md
new file mode 100644
index 0000000..b3a6627

---
layout: default
title: 'Notes from FRC: Converting joystick data to tank-drive outputs'
description: and making a tank-based robot's movements look natural
date: 2020-08-03
enable_katex: true
---

I am starting a new little series here called "Notes from FRC". The idea is that I am going to write about what I have learned over the past three years of working (almost daily) with robots, and hopefully someone in the future will find these posts useful. The production source code this post is based on is available [here](https://github.com/frc5024/lib5k/blob/cd8ad407146b514cf857c1d8ac82ac8f3067812b/common_drive/src/main/java/io/github/frc5024/common_drive/calculation/DifferentialDriveCalculation.java).

Today's topic is quite simple, yet almost nobody has written anything about it. One of the very first problems presented to you when working with an FRC robot is: *"I have a robot, and I have a controller... How do I make this thing move?"*. When I first started as a software developer at *Raider Robotics*, I decided to do some Googling, as I was sure someone would have at least written about this from the video-game industry... Nope.

Let's lay out the problem. We have an application that needs to run some motors from a joystick input. Periodically, we are fed a vector of joystick data, $\lbrack\begin{smallmatrix}T \\ S\end{smallmatrix}\rbrack$, where the values follow $-1\leq \lbrack\begin{smallmatrix}T \\ S\end{smallmatrix}\rbrack \leq 1$. $T$ denotes our *throttle* input, and $S$ denotes something we at Raider Robotics call *"rotation"*. As you will see later on, rotation is not quite the correct word, but none of us can come up with anything better. Some teams, who use a steering wheel as input instead of a joystick, call this number *wheel*, which makes sense in their context. Every time an input is received, we must also produce an output, $\lbrack\begin{smallmatrix}L \\ R\end{smallmatrix}\rbrack$, where the values follow $-12\leq \lbrack\begin{smallmatrix}L \\ R\end{smallmatrix}\rbrack \leq 12$. $\lbrack\begin{smallmatrix}L \\ R\end{smallmatrix}\rbrack$ is a vector containing *left* and *right* side motor output voltages respectively. Since we build [tank-drive](https://en.wikipedia.org/wiki/Tank_steering_systems)-style robots, when $\lbrack\begin{smallmatrix}L \\ R\end{smallmatrix}\rbrack = \lbrack\begin{smallmatrix}12 \\ 12\end{smallmatrix}\rbrack$, the robot would be moving forward at full speed, and when $\lbrack\begin{smallmatrix}L \\ R\end{smallmatrix}\rbrack = \lbrack\begin{smallmatrix}12 \\ 0\end{smallmatrix}\rbrack$, the robot would be pivoting right around the centre of its right track at full speed. The simplest way to convert a throttle and rotation input to left and right voltages is as follows:

$$
output = 12\cdot\begin{bmatrix}T + S \\ T - S\end{bmatrix}
$$

This can be expressed in Python as:

```python
from typing import Tuple

def computeMotorOutputs(T: float, S: float) -> Tuple[float, float]:
    return (12 * (T + S), 12 * (T - S))
```

In FRC, we call this method "arcade drive", since the controls feel like you are driving a tank in an arcade game. Although this is very simple, there is a big drawback.
At high values of $T$ and $S$, the computed voltages can exceed the $\pm12$ volt limit (for example, $T = 1$ and $S = 1$ would ask for $24$ volts on the left side), and clipping them distorts the ratio between the two sides. The best solution I have seen to this problem is to divide both $L$ and $R$ by $\frac{\max(abs(L), abs(R))}{12}$ whenever that value is greater than $1.0$. With this addition, the compute function now looks like this:

```python
from typing import Tuple

def computeMotorOutputs(T: float, S: float) -> Tuple[float, float]:
    # Calculate normal arcade values
    L = 12 * (T + S)
    R = 12 * (T - S)

    # Determine the largest requested output, as a fraction of full voltage
    m = max(abs(L), abs(R)) / 12

    # Scale back into the +/-12 volt range if needed
    if m > 1.0:
        L /= m
        R /= m

    return (L, R)
```

Perfect. Now we have solved the problem!

Of course, I'm not stopping here. Although arcade drive works, the result is not great. Small movements are very hard to get right, as a small movement on your controller will translate to a fairly large one on the robot (on an Xbox controller, we are fitting the entire range of 0 m/s to 5 m/s in about half an inch of joystick movement). This is generally tolerable when moving forward and turning, but when sitting still, it is near impossible to make precise rotational movements. Also, unless you have a lot of practice driving tank-drive vehicles, sharp turns are a big problem, as overshooting and skidding are very common. Wouldn't it be nice if we could have a robot that maneuvers in graceful curves like a car? This is where the second method of joystick-to-voltage conversion comes into play.

FRC teams like [254](https://www.team254.com/) and [971](https://frc971.org/) use variations of this calculation method called *"constant curvature drive"*. Curvature drive is only slightly different from arcade drive. Here is the new formula:

$$
output = 12\cdot\begin{bmatrix}T + abs(T) \cdot S \\ T - abs(T) \cdot S\end{bmatrix}
$$

If we also add the speed scaling from arcade drive, we are left with the following Python code:

```python
from typing import Tuple

def computeMotorOutputs(T: float, S: float) -> Tuple[float, float]:
    # Calculate normal curvature values
    L = 12 * (T + abs(T) * S)
    R = 12 * (T - abs(T) * S)

    # Determine the largest requested output, as a fraction of full voltage
    m = max(abs(L), abs(R)) / 12

    # Scale if needed
    if m > 1.0:
        L /= m
        R /= m

    return (L, R)
```

The $S$ component now changes the curvature of the robot's path, rather than the heading's rate of change. This makes the robot much more controllable at high speeds. There is one downside, though: as a tradeoff for making high-speed driving much more controllable, we have completely removed the robot's ability to turn when stopped.

This is where the final drive method comes into play. At Raider Robotics, we call it *"semi-constant curvature drive"*, and have been using it in gameplay with great success since 2019. Since we want to take the best parts of arcade drive and constant curvature drive, we came to the simple conclusion that we should just average the two methods. Doing this results in this new formula:

$$
output = 12\cdot\begin{bmatrix}\frac{(T + abs(T) \cdot S) + (T + S)}{2} \\ \frac{(T - abs(T) \cdot S) + (T - S)}{2}\end{bmatrix}
$$

And here is the associated Python code:


```python
from typing import Tuple

def computeMotorOutputs(T: float, S: float) -> Tuple[float, float]:
    # Calculate semi-constant curvature values
    L = 12 * (((T + abs(T) * S) + (T + S)) / 2)
    R = 12 * (((T - abs(T) * S) + (T - S)) / 2)

    # Determine the largest requested output, as a fraction of full voltage
    m = max(abs(L), abs(R)) / 12

    # Scale if needed
    if m > 1.0:
        L /= m
        R /= m

    return (L, R)
```
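To get a feel for how the three methods differ, here is what each one produces for a couple of sample inputs (assuming the three `computeMotorOutputs` variants above have been renamed `arcade`, `curvature`, and `semi_constant` so they can live in one file):

```python
# At half throttle, half "rotation":
arcade(0.5, 0.5)         # -> (12.0, 0.0)  pivots hard around the right track
curvature(0.5, 0.5)      # -> (9.0, 3.0)   sweeps a gentle arc
semi_constant(0.5, 0.5)  # -> (10.5, 1.5)  somewhere in between

# When stopped, curvature drive cannot turn at all:
arcade(0.0, 1.0)         # -> (12.0, -12.0)
curvature(0.0, 1.0)      # -> (0.0, 0.0)
semi_constant(0.0, 1.0)  # -> (6.0, -6.0)
```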
---

I hope someone will some day find this post helpful. I am working on a few more FRC-related posts about more advanced topics, and things I have learned through my adventures at Raider Robotics. If you would like to check out the code that powers all of this, take a look at our core software library: [Lib5K](https://github.com/frc5024/lib5k)

diff --git a/src/collections/_posts/2020-08-13-drivetrain-navigation.md b/src/collections/_posts/2020-08-13-drivetrain-navigation.md
new file mode 100644
index 0000000..c77b43b

---
layout: default
title: 'Notes from FRC: Autonomous point-to-point navigation'
description: The tale of some very curvy math
date: 2020-08-13
enable_katex: true
---

This post is a continuation of my "Notes from FRC" series. If you haven't already, I recommend reading my post on [Converting joystick data to tank-drive outputs](@/blog/2020-08-03-Joystick-to-Voltage.md). Some concepts in this post were introduced there. Like last time, to see the production code behind this post, check [here](https://github.com/frc5024/lib5k/blob/ab90994b2a0c769abfdde9a834133725c3ce3a38/common_drive/src/main/java/io/github/frc5024/common_drive/DriveTrainBase.java) and [here](https://github.com/frc5024/lib5k/tree/master/purepursuit/src/main/java/io/github/frc5024/purepursuit/pathgen).

At *Raider Robotics*, most of my work has been spent on these three subjects:
 - Productivity infrastructure
 - Developing our low-level library
 - Writing the software that powers our past three robots' *DriveTrain*s

When I joined the team, we had just started to design effective autonomous locomotion code. Although functional, our ability to maneuver robots around the FRC field autonomously was very limited, and came with very low precision. It has since been my goal to build a powerful software framework for precisely estimating our robot's real-world position at all times, and for giving anyone the tools to easily call a method and have the robot drive from point *A* to *B*. My goal with this post is to outline how this system actually works. But first, I need to explain some core concepts:

**Poses**. At Raider Robotics, we use the following vector components to denote a robot's position and rotation on a 2D plane (the floor). We call this magic vector a *pose*:

$$
pose = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix}
$$

With a robot sitting at $\big[\begin{smallmatrix}0 \\ 0 \\ 0\end{smallmatrix}\big]$, it would be facing in the positive $x$ direction.

**Localization**. When navigating the real world, the first challenge is knowing where the robot is. At Raider Robotics, we use an [Unscented Kalman Filter](https://en.wikipedia.org/wiki/Kalman_filter#Unscented_Kalman_filter) (UKF) that fuses high-accuracy encoder and gyroscope data with medium-accuracy VI-SLAM data fed from our robot's computer vision system. Our encoders are attached to the robot's tank track motor output shafts, counting the distance traveled by each track. Although this sounds extremely complicated, this algorithm can be boiled down to a simple (and low-accuracy) equation that originated in marine navigation, called [Dead Reckoning](https://en.wikipedia.org/wiki/Dead_reckoning):

$$
\Delta P = \begin{bmatrix}\frac{\Delta L + \Delta R}{2} \cdot \cos(\theta\cdot\frac{\pi}{180}) \\ \frac{\Delta L + \Delta R}{2} \cdot \sin(\theta\cdot\frac{\pi}{180}) \\ \Delta \theta \end{bmatrix}
$$

The result of this equation, $\Delta P$, is then accumulated over time into the robot's *pose* ($\Delta L$ and $\Delta R$ are the distances traveled by the *left* and *right* tank tracks since the last update).
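In code, that accumulation step is tiny. Here is a minimal sketch (assuming poses are simple `(x, y, theta)` tuples with `theta` in degrees, and that `dL`, `dR`, and `dTheta` are the sensor deltas since the last update):

```python
import math

def dead_reckon(pose, dL, dR, dTheta):
    # Average distance traveled by the two tracks this timestep
    d = (dL + dR) / 2

    # Project that distance along the current heading, and accumulate
    x = pose[0] + d * math.cos(math.radians(pose[2]))
    y = pose[1] + d * math.sin(math.radians(pose[2]))

    return (x, y, pose[2] + dTheta)
```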
With an understanding of the core concepts, let's say we have a tank-drive robot sitting at pose $A$, and we want to get it to pose $B$.

$$
A = \begin{bmatrix}0 \\ 0 \\ 0\end{bmatrix}
$$

$$
B = \begin{bmatrix}0 \\ 1 \\ 90\end{bmatrix}
$$

This raises an interesting problem. Our *goal pose* is directly to the left of our *current pose*, and tanks cannot strafe (travel in the $y$ axis without turning). Luckily, to solve this problem we just need to know our error from the goal pose as a distance ($\Delta d$) and a heading ($\Delta\theta$):

$$
\Delta d = \sqrt{\Delta x^2 + \Delta y^2}
$$

$$
\Delta\theta = \operatorname{atan2}(\Delta y, \Delta x) \cdot \frac{180}{\pi}
$$

Notice how a polar coordinate containing these values, $\big[\begin{smallmatrix}\Delta d \\ \Delta\theta\end{smallmatrix}\big]$, is very similar to our joystick input vector from the [previous post](@/blog/2020-08-03-Joystick-to-Voltage.md): $\big[\begin{smallmatrix}T \\ S\end{smallmatrix}\big]$. Converting our positional error into a polar coordinate makes the process of navigating to any point very simple. All we need to do is take the [Hadamard product](https://en.wikipedia.org/wiki/Hadamard_product_(matrices)) of the coordinate matrix with a gain matrix, which makes small adjustments to the output based on the physical characteristics of your robot, like the amount of voltage required to overcome static friction. This is a very simple P-gain controller.

$$
input = \begin{bmatrix}\Delta d \\ \Delta\theta\end{bmatrix}\circ\begin{bmatrix}K_t \\ K_s \end{bmatrix}
$$

This new input vector can now be fed directly into the code from the previous post, and as long as the $K_t$ and $K_s$ gains are tuned correctly, your robot will smoothly and efficiently navigate from pose $A$ to pose $B$ automatically.

There are a few tweaks that can be made to this method that will further smooth out the robot's movement. Firstly, we can multiply $\Delta d$ by a restricted version of $\Delta\theta$. This will cause the robot to slow down any time it is too far off course. While the robot moves slower, turns can be made faster and more efficiently, which cuts down on the amount of time needed to face the goal pose in the first place. We can calculate this gain, $m$, as:

$$
m = \big(-1 \cdot \frac{\min(abs(\Delta\theta), 90)}{90}\big) + 1
$$

$m$ is now a scalar that falls in $0 \leq m \leq 1$. Our calculation to determine a new "input" vector is now as follows:

$$
input = \begin{bmatrix}\Delta d \\ \Delta\theta\end{bmatrix}\circ\begin{bmatrix}K_t \\ K_s \end{bmatrix} \circ \begin{bmatrix}m \\ 1 \end{bmatrix}
$$

For even more controllability, Raider Robotics passes $\Delta d$ through a [PD](https://en.wikipedia.org/wiki/PID_controller#Selective_use_of_control_terms) controller, and $\Delta\theta$ through a [PI](https://en.wikipedia.org/wiki/PID_controller#PI_controller) controller, before converting them to motor values... and that is it! With just a couple of formulæ, we have a fully functional autonomous point-to-point locomotion system.
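Putting the pieces together, here is a sketch of one control step (using plain P-gains rather than the PD/PI controllers mentioned above, and the same `(x, y, theta)` pose tuples as before; the gain values and heading-wrapping details are illustrative):

```python
import math

def compute_drive_inputs(pose, goal, Kt=1.0, Ks=0.01):
    # Positional error
    dx = goal[0] - pose[0]
    dy = goal[1] - pose[1]

    # Convert the error to a polar coordinate: distance, and heading error
    # relative to where the robot is currently facing (wrapping the angle
    # into [-180, 180] is omitted here for brevity)
    dist = math.hypot(dx, dy)
    heading_err = math.degrees(math.atan2(dy, dx)) - pose[2]

    # Slow down when pointing too far away from the goal
    m = (-1 * min(abs(heading_err), 90) / 90) + 1

    # Apply the gains, producing a (T, S) vector for the drive code
    return (dist * Kt * m, heading_err * Ks)
```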
For a real-world example of this method in use, check out 5024's robot (bottom right) and 1114's robot (bottom left). Both teams were running nearly the same implementation, and we were both running autonomously for the first 15 seconds of the game.

---

I hope someone will some day find this post helpful. Most papers about this topic went way over my head in 10th grade, or were over-complicated for the task. If you would like me to go further in depth on this topic, [contact me](/contact) and let me know. I will gladly help explain things, or write a new post further expanding on a topic.

diff --git a/src/collections/_posts/2020-08-23-notetaking-with-latex.md b/src/collections/_posts/2020-08-23-notetaking-with-latex.md
new file mode 100644
index 0000000..683ac47

---
layout: default
title: Taking notes with Markdown and LaTeX
description: Using a lot of tech to replace a piece of paper
date: 2020-08-23
extra:
  excerpt: I have completely reworked my school notetaking system to use LaTeX. This post outlines how I did everything, and my new workflow.
redirect_from:
  - /post/68df02l4/
  - /68df02l4/
aliases:
  - /blog/2020/08/23/notetaking-with-latex
  - /blog/notetaking-with-latex
---

*You can view my public demo for this post [here](https://github.com/Ewpratten/school-notes-demo)*

Recently, I have been on a bit of a mission to improve my school workflow with software. Over the past month, I have built a cleaner [student portal](https://github.com/Ewpratten/student_portal#unofficial-tvdsb-student-portal-webapp) for my school and [written a tool](https://github.com/Ewpratten/timeandplace-api#timeandplace-api--cli-application) for automating in-class attendance. Alongside these projects, I have also been refining my notetaking system for school.

Since 9th grade, I have been taking notes in markdown in a private GitHub repository, and compiling them to HTML using a makefile for each course. While this system has worked ok, it has been far from perfect. Recently, I have been working very hard to give this system a much-needed upgrade. Here is the new tech stack:

 - The [Bazel buildsystem](https://bazel.build)
 - [Markdown](https://en.wikipedia.org/wiki/Markdown)
 - [LaTeX](https://en.wikipedia.org/wiki/LaTeX)
 - [MathJax](https://www.mathjax.org/)
 - [Beamer](https://ctan.org/pkg/beamer)
 - [Tikz & PGF](https://ctan.org/pkg/pgf)
 - [Pandoc](https://pandoc.org/)
 - [Zathura](https://pwmt.org/projects/zathura/)
 - [Starlark](https://docs.bazel.build/versions/master/skylark/language.html)
 - [Github Actions](https://github.com/features/actions) CI

The idea is that every course I take becomes its own Bazel package, with subpackages for things like assignments, papers, notes, and presentations. I can compile everything just by running the command `bazel build //:all`. All builds are cached using Bazel's build caching system, so when I run the command to compile my notes (I love saying that), I only end up compiling things that have changed since the last run. The setup for all of this is quite simple. All that is really needed is a Bazel workspace with the [`bazel_pandoc`](https://github.com/ProdriveTechnologies/bazel-pandoc) rules loaded (although I have opted to use some custom [genrules](https://docs.bazel.build/versions/master/be/general.html#genrule) instead; a sketch of one is at the end of this post). Using these rules, markdown files can be concatenated and compiled into a PDF. I also use a modified version of the [Eisvogel](https://github.com/Wandmalfarbe/pandoc-latex-template) Pandoc template to make all my documents look a little neater.

In terms of workflow, I write all my notes as markdown files with [embedded LaTeX](https://pandoc.org/MANUAL.html#math) for any equations and charts I may need.
All of this is done inside of VSCode, and I have a custom `tasks.json` file that lets me press Ctrl + Shift + b to re-compile whatever I am currently working on. I also keep Zathura open in a window to the side as a nearly-live preview system.

*A screenshot of my workspace*

Now, the question came up of *"how do you easily distribute notes and assignments to classmates and professors?"*. That question got me stuck for a while, but here is the system I have come up with:

 1. I write an assignment
 2. I push it to the private GitHub repository
 3. GitHub Actions picks up the deployment with a custom build script
 4. Every document is built into a PDF, and packaged with a directory listing generated by [`tree -H`](http://mama.indstate.edu/users/ice/tree/tree.1.html#XML/JSON/HTML%20OPTIONS)
 5. Everything is pushed to a subdomain on my website via GitHub pages
 6. I can share documents via URL with anyone

This is almost entirely accomplished by a shell script and a custom CI script.
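As promised, here is a sketch of one of the custom genrules (target names are illustrative, and this assumes Pandoc is installed on the machine running the build; my real rules also pass the Eisvogel template):

```python
# notes/BUILD

# Concatenate every markdown file in this package and hand the result
# to Pandoc, which renders the final PDF
genrule(
    name = "notes",
    srcs = glob(["*.md"]),
    outs = ["notes.pdf"],
    cmd = "pandoc -o $@ $(SRCS)",
)
```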
---

If you have any questions about this system, want me to write another post about it, or would like me to walk you through setting up a notes workspace of your own, [contact me](/contact)

diff --git a/src/collections/_posts/2020-09-03-bazel-and-avr.md b/src/collections/_posts/2020-09-03-bazel-and-avr.md
new file mode 100644
index 0000000..771eebb

---
layout: default
title: Compiling AVR-C code with a modern build system
description: Bringing Bazel to 8-bit microcontrollers
date: 2020-09-03
tags:
- avr
- embedded
- bazel
- walkthrough
extra:
  excerpt: In this post, I cover my process of combining low level programming with
    a very high level buildsystem.
redirect_from:
- /post/68dk02l4/
- /68dk02l4/
aliases:
- /blog/2020/09/03/bazel-and-avr
- /blog/bazel-and-avr
---

*The GitHub repository for everything in this post can be found [here](https://github.com/Ewpratten/avr-for-bazel-demo)*

When writing software for an Arduino, or any other [AVR](https://en.wikipedia.org/wiki/AVR_microcontrollers)-based device, there are generally three main options. You can use the [Arduino IDE](https://www.arduino.cc/en/main/software) with [arduino-cli](https://github.com/arduino/arduino-cli), which is, in my opinion, a clunky system that is great for high levels of abstraction and teaching people how to program, but lacks any kind of easy customization I am interested in. If you are looking for something more advanced (and something that works in your favorite IDE), you might look at [PlatformIO](https://platformio.org/). Finally, you can just program without any Hardware Abstraction Library at all, and use [avr-libc](https://www.nongnu.org/avr-libc/) along with [avr-gcc](https://www.microchip.com/mplab/avr-support/avr-and-arm-toolchains-c-compilers) and [avrdude](https://www.nongnu.org/avrdude/).

This final option is my favorite by far, as it both forces me to think about how the system I am building actually works "behind the scenes", and lets me do everything exactly the way I want. Unfortunately, when working directly with the AVR system libraries, the only buildsystem/tool that is available (without a lot of extra work) is [Make](https://en.wikipedia.org/wiki/Make_(software)). As somebody who spends 90% of his time working with higher-level buildsystems like [Gradle](https://gradle.org/) and [Bazel](https://bazel.build), I don't really like needing to deal with Makefiles and manually handle dependency loading. This got me thinking. I have spent a lot of time working in Bazel, and cross-compiling for the armv7l platform via the [FRC Toolchain](https://launchpad.net/~wpilib/+archive/ubuntu/toolchain/). How hard can it be to add AVR Toolchain support to Bazel?

*The answer: It's pretty easy.*

The Bazel buildsystem allows users to define custom toolchains via the [toolchain](https://docs.bazel.build/versions/master/toolchains.html) rule. I am going to assume you have a decent understanding of the [Starlark](https://docs.bazel.build/versions/master/skylark/language.html) DSL, or at least Python 3 (which Starlark is syntactically based on). To get started setting up a Bazel toolchain, I create empty `WORKSPACE` and `BUILD` files, along with a `.bazelrc` file, a new Bazel package named `toolchain` (with a `.bzl` file inside for the toolchain settings), and a package to store my test program.

```
/project
 |
 +-.bazelrc
 +-BUILD
 +-example
 | |
 | +-BUILD
 | +-main.cc
 +-toolchain
 | |
 | +-BUILD
 | +-avr.bzl
 +-WORKSPACE
```

I only learned about this recently, but you can use a `.bazelrc` file to define constant arguments to be passed to the buildsystem per-project. For this project, I am adding the following arguments to the config file to define which toolchain to use for which target:

```sh
# .bazelrc

# Use our custom-configured c++ toolchain.
build:avr_config --crosstool_top=//toolchain:avr_suite
build:avr_config --cpu=avr

# Use the default Bazel C++ toolchain to build the tools used during the
# build.
build:avr_config --host_crosstool_top=@bazel_tools//tools/cpp:toolchain
```

This config will make any build run with `--config=avr_config` use a custom toolchain named `avr_suite`, and compile to target the `avr` CPU architecture. The final line makes sure to use the host's toolchain for compiling tools needed by Bazel itself (since we can't run AVR code on the host machine). With this, we now have everything needed to tell Bazel what to use when building, but we have not actually defined the toolchain in the first place. This step comes in two parts. First, we need to define a toolchain implementation (this happens in `avr.bzl`). This implementation will define things like where to find every tool on the host, which libc version to use, and what types of tools are provided by avr-gcc in the first place. We can start out by adding some `load` statements to the file to tell Bazel what functions we need to use.

```python
# toolchain/avr.bzl

load("@bazel_tools//tools/build_defs/cc:action_names.bzl", "ACTION_NAMES")
load(
    "@bazel_tools//tools/cpp:cc_toolchain_config_lib.bzl",
    "feature",
    "flag_group",
    "flag_set",
    "tool_path",
)
```

Once this is done, we need to define everything that this toolchain implementation can do. In this case, avr-gcc can link executables, link dynamic libraries, and link a "nodeps" dynamic library.

```python
# ...

all_link_actions = [
    ACTION_NAMES.cpp_link_executable,
    ACTION_NAMES.cpp_link_dynamic_library,
    ACTION_NAMES.cpp_link_nodeps_dynamic_library,
]
```

We also need to tell Bazel where to find every tool. This may vary platform-to-platform, but with a standard avr-gcc install on Linux, the following should work just fine.
Experienced Bazel users may wish to make use of Bazel's [`config_setting` and `select`](https://docs.bazel.build/versions/master/configurable-attributes.html) rules to allow the buildsystem to run on any type of host via a CLI flag.

```python
# ...

tool_paths = [
    tool_path(
        name = "gcc",
        path = "/usr/bin/avr-gcc",
    ),
    tool_path(
        name = "ld",
        path = "/usr/bin/avr-ld",
    ),
    tool_path(
        name = "ar",
        path = "/usr/bin/avr-ar",
    ),
    tool_path(
        name = "cpp",
        path = "/usr/bin/avr-g++",
    ),
    tool_path(
        name = "gcov",
        path = "/usr/bin/avr-gcov",
    ),
    tool_path(
        name = "nm",
        path = "/usr/bin/avr-nm",
    ),
    tool_path(
        name = "objdump",
        path = "/usr/bin/avr-objdump",
    ),
    tool_path(
        name = "strip",
        path = "/usr/bin/avr-strip",
    ),
]
```

Finally, we need to define the actual avr-toolchain implementation. This can be done via a simple function, and the creation of a new custom rule:

```python
# ...

def _avr_impl(ctx):
    features = [
        feature(
            name = "default_linker_flags",
            enabled = True,
            flag_sets = [
                flag_set(
                    actions = all_link_actions,
                    flag_groups = ([
                        flag_group(
                            flags = [
                                "-lstdc++",
                            ],
                        ),
                    ]),
                ),
            ],
        ),
    ]

    return cc_common.create_cc_toolchain_config_info(
        ctx = ctx,
        features = features,
        toolchain_identifier = "avr-toolchain",
        host_system_name = "local",
        target_system_name = "local",
        target_cpu = "avr",
        target_libc = "unknown",
        compiler = "avr-g++",
        abi_version = "unknown",
        abi_libc_version = "unknown",
        tool_paths = tool_paths,
        cxx_builtin_include_directories = [
            "/usr/lib/avr/include",
            "/usr/lib/gcc/avr/5.4.0/include"
        ],
    )

cc_toolchain_config = rule(
    attrs = {},
    provides = [CcToolchainConfigInfo],
    implementation = _avr_impl,
)
```

The `cxx_builtin_include_directories` argument is very important. This tells the compiler where to find the libc headers. **Both** paths are required, as the headers are split between two directories on Linux for some reason. We are now done with the `avr.bzl` file, and can add the following to the `toolchain` package's `BUILD` file to register our custom toolchain as an official CC toolchain for Bazel to use:

```python
# toolchain/BUILD

load("@rules_cc//cc:defs.bzl", "cc_toolchain", "cc_toolchain_suite")
load(":avr.bzl", "cc_toolchain_config")

cc_toolchain_config(name = "avr_toolchain_config")

cc_toolchain_suite(
    name = "avr_suite",
    toolchains = {
        "avr": ":avr_toolchain",
    },
)

filegroup(name = "empty")

cc_toolchain(
    name = "avr_toolchain",
    all_files = ":empty",
    compiler_files = ":empty",
    dwp_files = ":empty",
    linker_files = ":empty",
    objcopy_files = ":empty",
    strip_files = ":empty",
    supports_param_files = 0,
    toolchain_config = ":avr_toolchain_config",
    toolchain_identifier = "avr-toolchain",
)
```

That's it. Now, if we wanted to compile a simple blink program in AVR-C, we can add the following to `main.cc` (a minimal blink for an Arduino Uno-style board, where the onboard LED sits on `PB5`):

```cpp
#ifndef F_CPU
#define F_CPU 16000000UL
#endif

#include <avr/io.h>
#include <util/delay.h>

int main() {
    // Set the LED pin (PB5, aka Arduino pin 13) as an output
    DDRB |= _BV(DDB5);

    while (1) {
        // Toggle the LED, then wait half a second
        PORTB ^= _BV(PORTB5);
        _delay_ms(500);
    }

    return 0;
}
```
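To actually build the program with the new toolchain, the `example` package needs a small `BUILD` file. A minimal sketch (the target name is illustrative):

```python
# example/BUILD

load("@rules_cc//cc:defs.bzl", "cc_binary")

cc_binary(
    name = "blink",
    srcs = ["main.cc"],
)
```

With that in place, the whole thing compiles with `bazel build --config=avr_config //example:blink`.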