Attempting to fix bad file case
Some checks failed
Deploy ewpratten.com / Deploy to Production (push) Failing after 1m3s
This commit is contained in:
parent a4913a8f6c
commit 3539ab62df
82
src/collections/_posts/2018-06-27-becomeranter.md
Normal file
@@ -0,0 +1,82 @@
|
||||
---
|
||||
layout: default
|
||||
title: Using a python script to create devRant posts based on the style and content
|
||||
of another user
|
||||
description: if/else ++
|
||||
date: 2018-06-27
|
||||
tags:
|
||||
- project
|
||||
- python
|
||||
- walkthrough
|
||||
aliases:
|
||||
- /blog/2018/06/27/becomeranter
|
||||
- /blog/becomeranter
|
||||
---
|
||||
|
||||
Ok... the title is slightly wrong. There are actually two scripts. Sorry about that.
|
||||
|
||||
This is a guide on installing and using the [BecomeRanter](https://github.com/Ewpratten/BecomeRanter) script.
|
||||
|
||||
## Getting dependencies
|
||||
|
||||
The scripts use Google's TensorFlow library to do their "magic", so first, we should install TensorFlow's dependencies.
|
||||
|
||||
```bash
|
||||
sudo apt install python3 python3-pip # change this command to fit your distro
|
||||
pip3 install numpy
|
||||
```
|
||||
|
||||
Then, install TensorFlow:
|
||||
|
||||
```bash
|
||||
pip3 install tensorflow # for cpu processing
|
||||
pip3 install tensorflow-gpu # for gpu processing
|
||||
```
|
||||
|
||||
Next up, install the remaining Python packages:
|
||||
|
||||
```bash
|
||||
pip3 install textgenrnn pandas keras
|
||||
```
|
||||
|
||||
## Clone the repo
|
||||
|
||||
This is pretty simple. Just make sure you have `git` installed and run:
|
||||
|
||||
```bash
|
||||
git clone https://github.com/Ewpratten/BecomeRanter.git
|
||||
```
|
||||
|
||||
## Generate some rants with a .hdf5 file
|
||||
|
||||
As of the time of writing this, I have pre-generated some files for the two most popular ranters. These files can be found in `BecomeRanter/Checkpoint\ Files`.
|
||||
|
||||
Higher epoch numbers mean that they have had more time to train. The files with lower numbers are generally funnier.
|
||||
|
||||
To change the .hdf5 file you would like to use, open the file called `createsomerants.py` and change the variable called `input_file` to the path of your file. By default, the script generates from the `Linuxxx-epoch-90.hdf5` file.
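For reference, the relevant line near the top of `createsomerants.py` should look something like this (the exact path below is an assumption based on the repo layout described above):

```python
# Checkpoint file to generate rants from (hypothetical example path;
# point this at whichever .hdf5 file you want to use)
input_file = "Checkpoint Files/Linuxxx-epoch-90.hdf5"
```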
|
||||
|
||||
Next, save that file and run the following in your terminal:
|
||||
|
||||
```bash
|
||||
python3 createsomerants.py >> output.txt
|
||||
```
|
||||
|
||||
It will not print the results to the screen; instead, they will be appended to `output.txt`.
|
||||
|
||||
To stop the script, press <kbd>CTRL</kbd> + <kbd>C</kbd>
|
||||
|
||||
## Create your own .hdf5 file
|
||||
|
||||
If you want to make your own hdf5 file, you just have to use the other script in the repo.
|
||||
|
||||
By default, you can just put all your text to train on in the `input.txt` file.
|
||||
|
||||
If you want to use a different file, or change the number of epochs, those variables can be found at the top of the `createhfd5frominput.py` file.
|
||||
|
||||
To start training, run:
|
||||
|
||||
```bash
|
||||
python3 createhfd5frominput.py
|
||||
```
|
||||
|
||||
A new .hdf5 file will be generated in the same folder as the script.
|
23
src/collections/_posts/2019-04-30-frc-languages.md
Normal file
@@ -0,0 +1,23 @@
|
||||
---
|
||||
layout: default
|
||||
title: The language hunt
|
||||
date: 2019-04-30
|
||||
aliases:
|
||||
- /blog/frc-languages
|
||||
---
|
||||
|
||||
Our programming team is looking to switch languages in the 2020 season. Here is the what, why, and how.
|
||||
|
||||
## Our history
|
||||
We started out as a Java team back in 2014 because Java was (and still is) the language being taught in our programming classes. Honestly, our code sucked, as many rookie teams' code does. There were no fancy features or sensor-assisted autonomous. Direct input into Talons was our way to roll.
|
||||
|
||||
A few years later, we had a change in team organization and switched to C++. Up until the 2019 / 2020 season, this was our language, and we were getting pretty good at using it.
|
||||
|
||||
## The Problem
|
||||
We, as a team, are looking to bring our programming and robots to the next level in 2020. Because of this, we ran into a problem. While C++ is an amazing language for embedded and robotics programming, some of its "features" were starting to act as a bottleneck to our design. Less time was being spent polishing our new vision system or autonomous climb, and more on that crazy linker error that came out of nowhere.
|
||||
|
||||
It's time for a change, but what do we change to?
|
||||
|
||||
## Part 2
|
||||
The followup can be found [HERE](@/blog/2019-06-24-LanguageHunt2.md).
|
||||
|
52
src/collections/_posts/2019-06-12-styiling-github.md
Normal file
@@ -0,0 +1,52 @@
|
||||
---
|
||||
layout: default
|
||||
title: GitHub's CSS is boring. So I refreshed the design
|
||||
date: 2019-06-12
|
||||
tags:
|
||||
- project
|
||||
- github
|
||||
aliases:
|
||||
- /blog/2019/06/12/styiling-github
|
||||
- /blog/styiling-github
|
||||
---
|
||||
|
||||
I have been using GitHub since 2017, and have been getting tired of GitHub's theme. I didn't need a huge change, just a small refresh. So, to solve this, I whipped out [Stylus](https://addons.mozilla.org/en-CA/firefox/addon/styl-us/) and made a nice little CSS file for it.
|
||||
|
||||
## The CSS
|
||||
Here is the CSS. Feel free to play with it.
|
||||
|
||||
```css
|
||||
@-moz-document url-prefix("https://github.com/") {
|
||||
.Header {
|
||||
background-color: #1a3652;
|
||||
}
|
||||
|
||||
.repohead.experiment-repo-nav {
|
||||
background-color: #fff;
|
||||
}
|
||||
.reponav-item.selected {
|
||||
border-color: #fff #fff #4a79a8;
|
||||
}
|
||||
|
||||
.btn.hover,
|
||||
.btn:hover,
|
||||
.btn,
|
||||
.btn {
|
||||
background-color: #fafafa;
|
||||
background-image: linear-gradient(-180deg, #fafafa, #fafafa 90%);
|
||||
}
|
||||
|
||||
.btn-primary.hover,
|
||||
.btn-primary:hover,
|
||||
.btn-primary,
|
||||
.btn-primary {
|
||||
background-color: #1aaa55;
|
||||
background-image: linear-gradient(-180deg, #1aaa55, #1aaa55 90%);
|
||||
}
|
||||
|
||||
.overall-summary {}
|
||||
}
|
||||
```
|
||||
|
||||
## Use it yourself
|
||||
I put this theme on userstyles.org. You can download and install it by going to [my userstyles page](https://userstyles.org/styles/172679/ewpratten-s-githubtheme).
|
67
src/collections/_posts/2019-06-16-graphing-w2a.md
Normal file
@@ -0,0 +1,67 @@
|
||||
---
|
||||
layout: default
|
||||
title: Graphing the relation between wheels and awards for FRC
|
||||
description: AKA. Why programmer + reddit + matplotlib is a bad idea.
|
||||
date: 2019-06-16
|
||||
tags:
|
||||
- frc
|
||||
- data-analysis
|
||||
aliases:
|
||||
- /blog/2019/06/16/graphing-w2a
|
||||
- /blog/graphing-w2a
|
||||
---
|
||||
|
||||
I was scrolling through reddit the other day, and came across [this great post](https://www.reddit.com/r/FRC/comments/byzv5q/i_know_what_im_doing/) by u/[MasterQuacks](https://www.reddit.com/user/MasterQuacks/).
|
||||
|
||||

|
||||
|
||||
I thought to myself, "Ha, that's funny", and moved on. But that thought stuck with me.
|
||||
|
||||
So here I am, bored on a Sunday afternoon, staring at the matplotlib documentation.
|
||||
|
||||
## My creation
|
||||
In only a few lines of Python, I have a program that will (badly) graph the number of awards per wheel for any team or set of teams.
|
||||
|
||||
As always, feel free to tinker with the code. This one is not published anywhere, so if you want to share it, I would appreciate a mention.
|
||||
|
||||
```python
|
||||
import requests
|
||||
import matplotlib.pyplot as plt
|
||||
|
||||
class Team:
|
||||
def __init__(self, id, wheels):
|
||||
self.id = id
|
||||
self.wheels = wheels * 2
|
||||
|
||||
### CONFIG ###
|
||||
|
||||
teams = [Team(5024, 3), Team(254, 4), Team(1114, 3), Team(5406, 3), Team(2056, 4)]
|
||||
year = 2019
|
||||
|
||||
##############
|
||||
|
||||
|
||||
for i, team in enumerate(teams):
|
||||
award_data = requests.get("https://www.thebluealliance.com/api/v3/team/frc" + str(team.id) + "/awards/" + str(year), params={"X-TBA-Auth-Key": "mz0VWTNtXTDV8NNOz3dYg9fHOZw8UYek270gynLQ4v9veaaUJEPvJFCZRmte7AUN"}).json()
|
||||
|
||||
awards_count = len(award_data)
|
||||
|
||||
team.w2a = awards_count / team.wheels
|
||||
print(team.id, team.w2a)
|
||||
|
||||
plt.bar(i + 1, team.w2a, tick_label=str(team.id))
|
||||
|
||||
# Plot
|
||||
x_lables = [team.id for team in teams]
|
||||
# plt.set_xticklabels(x_lables)
|
||||
|
||||
with plt.xkcd():
|
||||
plt.title('Awards per wheel')
|
||||
plt.show()
|
||||
|
||||
```
|
||||
|
||||
## The result
|
||||
Here is the resulting image. From left to right: 5024, 254, 1114, 5406, 2056.
|
||||
|
||||

|
55
src/collections/_posts/2019-06-21-robot-experiences.md
Normal file
@@ -0,0 +1,55 @@
|
||||
---
|
||||
layout: default
|
||||
title: What I have learned from 2 years of FRC programming
|
||||
description: Robots are pretty cool
|
||||
date: 2019-06-21
|
||||
tags:
|
||||
- frc
|
||||
aliases:
|
||||
- /blog/2019/06/21/robot-experiences
|
||||
- /blog/robot-experiences
|
||||
---
|
||||
|
||||
Over the past two years (2018 / 2019), I have been a member of my school's [FRC](https://www.firstinspires.org/robotics/frc) team, [Raider Robotics](https://frc5024.github.io), specifically as a programmer.
|
||||
|
||||
## My roles
|
||||
In my first year, I joined the team as a programmer and had a fun time learning about embedded programming and development with hardware. Then, in my second year, I was promoted to programming co-lead along with [@slownie](https://github.com/slownie). I much preferred my second season because I had a better understanding of the technology I was working with, and we got to play with some cool tools throughout the season.
|
||||
|
||||
## What I have learned
|
||||
Starting with our 2018 season, PowerUP. We learned early on that there is a practical limit to the number of programmers that 5024 can handle. That year, we had too many, and our situation was not helped by the fact that some members preferred scrolling through Instagram over writing code. This issue was almost entirely fixed by the introduction of a mandatory skill exam at the start of the season. Sam and I did not really care about the scores of the exam because, from reading the results, we could see who was actually motivated to join the team. Thanks to the test, we entered the season with seven excited programmers.
|
||||
|
||||
During the PowerUP season, I also learned the importance of student involvement. Most of the code from the season was written by mentors, with the students just watching on a projector. After talking with other team members, I learned that none of them thought this was a good method of teaching, and many felt powerless. In the 2019 season, I completely reversed this. All students worked together on the codebase, and the mentors worked on other projects and provided input where needed.
|
||||
|
||||
### Version Control
|
||||
During the 2018 season, code was shared around by USB. This led to crazy conflicts, confusion over what was running on the robot, and general frustration during competitions. In 2019, I moved the team over to a [GitHub](https://github.com) organization account and sent an email to support to get us unlimited private repos (thanks GitHub!). For the team members that were not comfortable in the terminal, I set them up with [GitKraken Pro](https://www.gitkraken.com/) accounts, and they enjoyed using the program. The rest of us stuck with the Git CLI, or various plugins for VS Code.
|
||||
|
||||
### Alpha test
|
||||
I got our team on board with the 2019 toolchain alpha test the week it was released, in order to get everyone used to the new tools before the season (and help find bugs for the WPILib team). The new build system, Gradle, worked great, and I could even run it on the Chromebook I was using for development at the time! To further assist the team, I set up a CI pipeline for automatic testing and code reviews of pull requests, and a Doxygen + GitHub Pages CD pipeline for our new documentation webpage.
|
||||
|
||||
### Webdocs
|
||||
A significant amount of my time was spent answering repetitive questions from the team. I enjoy helping people out, but explaining the same things over and over was starting to frustrate me. The root cause was a lack of documentation, or bits of documentation spread over multiple websites. To solve this problem, I started the [Webdocs Page](https://frc5024.github.io/webdocs/#/). This website is designed to house a mix of team-specific notes, guides, low-level documentation, and documentation from all FRC vendors. The site was published after the season, so I will find out how useful it really is during the 2020 season.
|
||||
|
||||
### Command base
|
||||
"Command based programming is great. But..." is probably the best way to describe my suggested changes for 2020.
|
||||
|
||||
I have been learning from other teams, and from mentors about better ways to control our robot. During the offseason, I am playing with new ways to write robot code. Here are some of my changes:
|
||||
- Use a custom replacement for WPILib's [Subsystem](https://first.wpi.edu/FRC/roborio/release/docs/java/edu/wpi/first/wpilibj/command/Subsystem.html) that buffers its inputs and outputs
|
||||
- This reduces load on our CAN and Ethernet networks
|
||||
- Offload all camera and vision work to a Raspberry Pi
|
||||
- Every subsystem must push telemetry data to NetworkTables for easy debugging and detailed logs
|
||||
- Use a custom logging system that buffers writes to stdout. This reduces network strain
|
||||
|
||||
I am working on many other changes over on the [MiniBot](https://github.com/frc5024/MiniBot) codebase.
|
||||
|
||||
## My plans for 2020
|
||||
I have been re-selected to be the sole lead of the 5024 programming team for 2020. Here are my goals:
|
||||
- Switch the team from C++ to Java
|
||||
- Easier for prototyping
|
||||
- Better memory management for high-level programmers
|
||||
- Better documentation from vendors
|
||||
- It is taught in our school's compsci classes
|
||||
- Remove the skills exam in favour of weekly homework for the first 8 weeks
|
||||
- Provide writeups of lessons
|
||||
- Have mentors do "guest presentations"
|
||||
- Dedicate a day to robot driving lessons
|
||||
- Use a custom library with wrappers and tools built by me to provide easy interfaces for new programmers
|
22
src/collections/_posts/2019-06-24-languagehunt2.md
Normal file
@@ -0,0 +1,22 @@
|
||||
---
|
||||
layout: default
|
||||
title: 'The language hunt: Part 2'
|
||||
description: A quick followup
|
||||
date: 2019-06-24
|
||||
tags:
|
||||
- frc
|
||||
aliases:
|
||||
- /blog/2019/06/24/languagehunt2
|
||||
- /blog/languagehunt2
|
||||
---
|
||||
|
||||
This is a very short post, just to explain the result of [The language Hunt](@/blog/2019-04-30-FRC-Languages.md).
|
||||
|
||||
## Our choice
|
||||
For our upcoming 2020 season and for the foreseeable future, we have chosen Java as our programming language for direct hardware interfacing, and Python for networking, vision, and other smaller tasks.
|
||||
|
||||
## What does this mean for the team?
|
||||
Not too much. Aside from learning new syntax and tools, and no longer worrying about linker errors, there is no real difference between Java and C++ for us. Most of the reason Java was chosen comes down to support rather than functionality: Java is much better supported by FIRST, WPILib, and other vendors, and it is also taught at the school 5024 is based out of. For a more detailed explanation of the benefits of each language, take a look at Chief Delphi; there are plenty of posts there explaining the choices of many teams and their reasoning.
|
||||
|
||||
## Side note
|
||||
I am experimenting with various post formats (This being a short post). Let me know which you prefer via the social platform of your choice.
|
140
src/collections/_posts/2019-06-26-bashsmash.md
Normal file
@@ -0,0 +1,140 @@
|
||||
---
|
||||
layout: default
|
||||
title: BashSmash
|
||||
description: A tool for driving people crazy
|
||||
date: 2019-06-26
|
||||
tags:
|
||||
- project
|
||||
- bash
|
||||
aliases:
|
||||
- /blog/2019/06/26/bashsmash
|
||||
- /blog/bashsmash
|
||||
---
|
||||
|
||||
I was watching this great [Liveoverflow video](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=2ahUKEwiOhNze_4fjAhUiB50JHR12D8AQwqsBMAB6BAgJEAQ&url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D6D1LnMj0Yt0&usg=AOvVaw2nOgft0SoPZujc9js9Vxhx) yesterday, and really liked the idea of building escape sequences with strings. So, I built a new tool, [BashSmash](https://pypi.org/project/bashsmash/).
|
||||
|
||||
## The goal
|
||||
The goal of BashSmash is very similar to that described in Liveoverflow's video. Do anything in bash without using any letters or numbers except `n` and `f` (he used `i` instead of `f`). This can both bypass shell injection filters, and generally mess with people.
|
||||
|
||||
Saying "Hey, you should run:"
|
||||
```bash
|
||||
__() {/???/???/???n?f ${#};}; $(/???/???/???n?f $(/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" "" ``__ "" "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" "" ``__ "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" ``__ "" "" "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" ``__ "" "" "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" "" "" "" ``__ `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" "" ``__ "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" ``__ "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" "" "" "" ``__ `";/???/???/???n?f "\\\\`__ "" "" "" "" "" ``__ "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" "" ``__ "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" ``__ "" "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" "" "" "" ``__ `";/???/???/???n?f "\\\\`__ "" "" "" "" "" ``__ "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" "" "" "" "" ``__ "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" ``__ "" "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" ``__ "" "" "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" "" "" "" "" ``__ "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" "" ``__ `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" "" ``__ "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" ``__ "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" "" ``__ "" "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" ``__ "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" "" ``__ "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" "" ``__ "" "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" ``__ "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" "" "" "" "" ``__ "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" "" ``__ "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" ``__ "" "" "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" ``__ "" "" "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" "" ``__ "" "" "" "" `";/???/???/???n?f "\\\\`__ "" "" "" "" ``__ `";/???/???/???n?f "\\\\`__ "" "" "" "" "" ``__ "" "" "" "" "" "" "" `";););
|
||||
```
|
||||
|
||||
Instead of:
|
||||
```bash
|
||||
sudo rm -rf --no-preserve-root /
|
||||
```
|
||||
|
||||
Can usually get you much farther with your goal of world domination.
|
||||
|
||||
## How does this work?
|
||||
BashSmash abuses bash wildcards, octal escape codes, and a large number of backslashes to obfuscate any valid shell script.
|
||||
|
||||
Firstly, it is important to know that `printf` will gladly convert any octal escape sequence to a string, and bash's command substitution (`$()`) will gladly run the resulting string as a bash command. (See where this is going?)
|
||||
|
||||
Because of these tools, we know that the following is possible:
|
||||
```bash
|
||||
# Printf-ing a string will print the string
|
||||
printf "hello" # This will return hello
|
||||
|
||||
# Printf-ing a sequence of octal escapes will also print a string
|
||||
printf "\150\145\154\154\157" # This will also return hello
|
||||
|
||||
# Eval-ing a printf of an octal escape sequence will build a string, then run it in bash
|
||||
$(printf "\150\145\154\154\157") # This will warn that "hello" is not a valid command
|
||||
```
|
||||
|
||||
This has some issues. You may have noticed that letters are required to spell `printf`, and numbers are needed for the octal escapes. Let's start by fixing the letters problem.
|
||||
|
||||
Bash allows wildcards. You may have run something like `cp ./foo/* ./bar` before. This uses the wildcard `*`. The `*` wildcard will be auto-evaluated to expand into a list of all files in its place.
|
||||
```bash
|
||||
# Let's assume that ./foo contains the following files:
|
||||
# john.txt
|
||||
# carl.txt
|
||||
|
||||
# Running the following:
|
||||
cat ./foo/*
|
||||
|
||||
# Will automatically expand to:
|
||||
cat ./foo/john.txt ./foo/carl.txt
|
||||
|
||||
# Now, lets assume that ./baz contains a single file:
|
||||
# KillHumans.sh
|
||||
|
||||
# Running:
|
||||
./baz/*
|
||||
|
||||
# Will execute KillHumans.sh
|
||||
```
|
||||
|
||||
Neat, Right? To take this a step further, you can use the second wildcard, `?`, to specify the number of characters you want to look for. Running `./baz/?` will not run `KillHumans.sh` because `KillHumans.sh` is not 1 char long. But `./baz/?????????????` will. This is messy, but it works.
|
||||
|
||||
Now, back to our problem with `printf`. `printf` is located at `/usr/bin/printf` on most *nix systems. This is handy because, firstly, the path can be wildcarded, and secondly, it contains two `n`'s and an `f` (the two letters we are allowed to use). So, instead of calling `printf`, we can call `/???/??n/???n?f`.
|
||||
```bash
|
||||
# Now, we can call:
|
||||
/???/??n/???n?f "\150\145\154\154\157"
|
||||
|
||||
# To print "hello". Or:
|
||||
$(/???/??n/???n?f "\150\145\154\154\157")
|
||||
|
||||
# To run "hello" as a program (still gives an error)
|
||||
```
|
||||
|
||||
Now, our problem with letters is solved, but we are still using numbers.
|
||||
|
||||
Bash allows anyone to define functions. These functions can take arguments and call other programs. So, what if we have a function that can take any number of arguments, and return the number of arguments as a number? This will be helpful because an empty argument can be added with `""` (not a number or letter), and this will replace the need for numbers in our code. On a side note, bash allows `__` as a function name, so that's cool.
|
||||
|
||||
```bash
|
||||
# Our function needs to do the following:
|
||||
# - Take any number of arguments
|
||||
# - Turn the number to a string
|
||||
# - Print the string so it can be evaluated back to a number with $()
|
||||
|
||||
# First, we start with an empty function, named __ (two underscores)
|
||||
__() {};
|
||||
|
||||
# Easy. Next, we use a built-in feature of bash to count the number of arguments passed
|
||||
__() { ${#} };
|
||||
|
||||
# With the ${#} feature in bash, giving this function 3 arguments will return a 3
|
||||
# Next, we need to print this number to stdout
|
||||
# This can be done with printf
|
||||
# We still do not want to use any letters or numbers, so we must use our string of wildcards
|
||||
/???/??n/???n?f
|
||||
|
||||
# So, we just plug this into our function
|
||||
__() {/???/??n/???n?f ${#}};
|
||||
|
||||
# Now, calling our function with three arguments
|
||||
__ "" "" ""
|
||||
# Will print:
|
||||
3
|
||||
```
|
||||
|
||||
Let's put this together. First, we must tell bash that our `__` function exists.
|
||||
``` bash
|
||||
# We do this by starting our new script with:
|
||||
__() {/???/??n/???n?f ${#}};
|
||||
|
||||
# Next, an eval to actually run our constructed string. Together it now looks like this:
|
||||
__() {/???/??n/???n?f ${#};}; $(/???/??n/???n?f )
|
||||
|
||||
# Now, we construct a string using the __ function over and over again. "echo hello" looks like:
|
||||
__() {/???/???/???n?f ${#};}; $(/???/???/???n?f $(/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" ``__ "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" ``__ "" "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" ``__ `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" ``__ "" "" "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" "" "" "" ``__ `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" ``__ `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" ``__ "" "" "" "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" ``__ "" "" "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" ``__ "" "" "" "" `";/???/???/???n?f "\\\\`__ "" ``__ "" "" "" "" "" ``__ "" "" "" "" "" "" "" `";););
|
||||
```
|
||||
|
||||
That's it! You do not actually have to worry about this, because BashSmash does it all for you automatically.
|
||||
|
||||
## How do I use the script?
|
||||
To use BashSmash, simply make sure both `python3.7` and `python3-pip` are installed on your computer, then run:
|
||||
```
|
||||
pip3 install bashsmash
|
||||
```
|
||||
|
||||
For more info, see the [PYPI Page](https://pypi.org/project/bashsmash/).
|
||||
|
||||
## Why do you have a desire to break things with Python?
|
||||
Because it is fun. Give it a try!
|
||||
|
||||
I will have a post here at some point about the weird things I do in my python code and why I do them.
|
50
src/collections/_posts/2019-06-27-pwnlink.md
Normal file
@@ -0,0 +1,50 @@
|
||||
---
|
||||
layout: default
|
||||
title: I had some fun with a router
|
||||
description: cleartext passwords + external management = death wish
|
||||
date: 2019-06-27
|
||||
tags:
|
||||
- project
|
||||
- cybersecurity
|
||||
aliases:
|
||||
- /blog/2019/06/27/pwnlink
|
||||
- /blog/pwnlink
|
||||
---
|
||||
|
||||
I was playing around with some D-link routers today and remembered an [ExploitDB Entry](https://www.exploit-db.com/exploits/33520) I read a while ago. Many D-link routers have a great feature that allows remote management and configuration queries. Interestingly, this cannot be disabled, and one of the pages contains a cleartext version of the admin password (yay!).
|
||||
|
||||
## How to get yourself an admin password
|
||||
On any supported router, make an HTTP request to `http://your.router.ip.addr/tools_admin.asp/`. This will return a pretty large XML file containing information about your router's hardware and configuration.
|
||||
|
||||
Notice the fact that you did not have to log in. This is due to the fact that this file seems to be used by a remote management service of some sort.
|
||||
|
||||
The important thing to note here is that, when parsed with the regex pattern: `name="user_password_tmp" value="(.*)">`, you get a single string. This string is the admin password of the device.
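Put together, the manual version of that process might look roughly like this in Python (a sketch assuming the `requests` library; the router IP below is a placeholder):

```python
import re

import requests

ROUTER_IP = "192.168.0.1"  # placeholder; use your router's address

# Fetch the unauthenticated configuration page described above
page = requests.get(f"http://{ROUTER_IP}/tools_admin.asp/", timeout=5).text

# Apply the regex pattern from above to pull out the cleartext admin password
match = re.search(r'name="user_password_tmp" value="(.*)">', page)
if match:
    print("Admin password:", match.group(1))
else:
    print("No password found; this router may not be affected")
```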
|
||||
|
||||
## Supported routers
|
||||
This is supported by many D-link routers. The ones I know about are:
|
||||
- DIR-835
|
||||
- DIR-855L
|
||||
- DGL-5500
|
||||
|
||||
Some routers have this XML file, but it is restricted... by a user account without a password. These are:
|
||||
- DHP-1565
|
||||
- DIR-652
|
||||
|
||||
## PWNlink
|
||||
Like everything I play with, I made a script to do this all for me (and spent a large amount of time adding colours to the text).
|
||||
|
||||
My script is called PWNlink (PWN + D-link). It automatically finds a router on your network by looking for a specific DNS entry created by many D-link routers, then checking your gateway. Next, PWNlink reads your router's `hnap1` config to find its model number. If the router is supported, the script reads and parses the appropriate configs to give you the admin credentials for your router.
|
||||
|
||||
PWNlink can be installed on any *nix computer that has both `python3.7` and `python3-pip` installed. To install PWNlink, run:
|
||||
```
|
||||
pip3 install pwnlink
|
||||
```
|
||||
|
||||
Run the script without any arguments for automatic detection, or pass any IP address to use manual detection.
|
||||
|
||||
## Disclaimer thingy
|
||||
I don't see much point to these, but I should probably put one anyways.
|
||||
|
||||
**Don't be dumb with this script.**
|
||||
|
||||
I have only used it on my own (or 5024's) routers, and did not create PWNlink with any malicious intent.
|
122
src/collections/_posts/2019-06-27-python.md
Normal file
@@ -0,0 +1,122 @@
|
||||
---
|
||||
layout: default
|
||||
title: Hunting snakes with a shotgun
|
||||
description: Python is a little too forgiving
|
||||
date: 2019-06-27
|
||||
tags:
|
||||
- random
|
||||
- python
|
||||
aliases:
|
||||
- /blog/2019/06/27/python
|
||||
- /blog/python
|
||||
---
|
||||
|
||||
A rather large number of people know me as "the guy who does weird things with python". I would object to this title, but it is quite accurate. So, here are some of the things I like playing with in python. None of these are actually breaking the language, just little known facts and syntax. At some point I will share about actually breaking the language. For now, enjoy the weird things I have found over the past 6 years.
|
||||
|
||||
## Type hints
|
||||
A little-known feature of Python is called "type hinting" (PEP 484). It is actually quite common to see in the standard library, and has its own special syntax:
|
||||
```python
|
||||
# Here is a regular function
|
||||
def meep(a, b):
|
||||
return a*b**2
|
||||
|
||||
# This function has no real reason to exist, and is lacking any sort of documentation.
|
||||
# Let's add a docstring to explain what it does
|
||||
|
||||
def meep(a, b):
|
||||
""" This function returns the result of a times b squared """
|
||||
return a*b**2
|
||||
|
||||
# Ok. The docstring explains the function, but is not too helpful
|
||||
# what are a and b? what does this return?
|
||||
# For all we know, a could actually be a string (in which case, this function would return a string)
|
||||
# Let's fix that up with a type hint
|
||||
|
||||
def meep(a: int, b: int):
|
||||
""" This function returns the result of a times b squared """
|
||||
return a*b**2
|
||||
|
||||
# Thanks to the :int (called a type hint in case you didn't notice that yet), we now know that this function expects two ints.
|
||||
# Now, to finish this up with a secondary type hint to specify the return type
|
||||
def meep(a: int, b: int) -> int:
|
||||
""" This function returns the result of a times b squared """
|
||||
return a*b**2
|
||||
|
||||
# There. Now we can clearly see that this function takes two ints, and returns one int.
|
||||
# If only this was a requirement in the language. So many headaches could be solved.
|
||||
```
|
||||
|
||||
Now, keep in mind that this is called a type *hint*. The Python compiler (yes... give me a second for that one) does not actually care whether you obey the hint or not. Feel free to send incorrect data into a hinted function and see what you can break. Critical functions should both hint and check the data types being provided.
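For example, a critical function might pair the hints with an explicit runtime check (a minimal sketch, not from the original post):

```python
def meep(a: int, b: int) -> int:
    """Return a times b squared, refusing anything that is not an int"""
    # The hint documents the contract; the isinstance check enforces it
    if not isinstance(a, int) or not isinstance(b, int):
        raise TypeError("meep() expects two ints")
    return a * b ** 2
```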
|
||||
|
||||
## Type declarations
|
||||
Just like type hints for functions, Python has hints for variables too.
|
||||
```python
|
||||
# A regular variable. Must be declared with an initial value
|
||||
my_state = None
|
||||
|
||||
# my_state is None, as it has not been set, but needs to exist.
|
||||
# Let's assume that my_state is to be a state:
|
||||
class State:
|
||||
status = False
|
||||
def toggle(self):
|
||||
self.status = not self.status
|
||||
|
||||
# Finally, its time to set the state to something useful
|
||||
my_state = State()
|
||||
my_state.toggle()
|
||||
|
||||
# Ok.. I hate this. Let's start by using type declarations first
|
||||
# Any variable can be un-initialized and just have a type. Like so:
|
||||
my_state: State
|
||||
|
||||
# This works for anything
|
||||
is_alive: bool
|
||||
age: int
|
||||
name: str
|
||||
|
||||
# Now, with this new knowledge, let's rewrite State
|
||||
class State:
|
||||
status: bool
|
||||
def toggle(self: State) -> None:
|
||||
self.status = not self.status
|
||||
|
||||
# And initialize my_state with slightly different syntax
|
||||
my_state = State(status=True)
|
||||
```
|
||||
|
||||
I have not found much use for this yet. Hopefully there is something cool to use it for.
|
||||
|
||||
## One-line functions
|
||||
This is more common knowledge: a function can be declared in one line.
|
||||
```python
|
||||
# Here is an adder function
|
||||
def adder1(a:int, b:int) -> int:
|
||||
return a+b
|
||||
|
||||
# Here is a one-line adder function
|
||||
adder2 = lambda a,b : a+b
|
||||
|
||||
# State from above can be compacted further:
|
||||
class State:
|
||||
status: bool
|
||||
toggle = lambda self: setattr(self, "status", not self.status)
|
||||
```
|
||||
|
||||
## Ternary operations
|
||||
On the trend of one-line code, we have the one-line if/else, also known as a ternary in more sensible languages.
|
||||
```python
|
||||
# Here is an if/else
|
||||
if 100 == 5:
|
||||
print("The world has ended")
|
||||
else:
|
||||
print("All is good")
|
||||
|
||||
# Here is a smaller if/else
|
||||
print("The world has ended" if 100 == 5 else "All is good")
|
||||
```
|
||||
|
||||
## Compiled Python
|
||||
This one is interesting. Python, like Java, is compiled into bytecode. So yes, it technically is a compiled language. To see said bytecode, take a look at any `.pyc` file sitting in your `__pycache__` folder.
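If you just want to peek at the bytecode without digging through `__pycache__`, the standard library's `dis` module will disassemble a function directly (a quick illustration, not from the original post):

```python
import dis

def adder(a: int, b: int) -> int:
    return a + b

# Prints the CPython bytecode instructions behind adder()
dis.dis(adder)
```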
|
||||
|
||||
## Blog formatting experiments
|
||||
I am still playing with post formats, and various types of content. This is more random than I usually prefer. Let me know your thoughts on the social media platform of your choosing.
|
41
src/collections/_posts/2019-07-01-devdns.md
Normal file
@@ -0,0 +1,41 @@
|
||||
---
|
||||
layout: default
|
||||
title: devDNS
|
||||
description: The DNS over devRant service
|
||||
date: 2019-07-01
|
||||
tags: project
|
||||
aliases:
|
||||
- /blog/2019/07/01/devdns
|
||||
- /blog/devdns
|
||||
---
|
||||
|
||||
Over the past year and a half, I have been hacking my way around the undocumented [devRant](https://devrant.com) auth/write API. At the request of devRant's creators, this API must not be documented due to the way logins work on the platform. That is besides the point. I have been working on a little project called [devDNS](https://devrant.com/collabs/2163502) over the past few days that uses this undocumented API. Why must I be so bad at writing intros?
|
||||
|
||||
## What is devDNS
|
||||
devDNS is a devRant bot written in python. It will serve any valid DNS query from any user on the platform. A query is just a comment in one of the following forms:
|
||||
```
|
||||
@devDNS example.com
|
||||
```
|
||||
or
|
||||
```
|
||||
@devDNS MX example.com
|
||||
```
|
||||
Of course, `MX` and `example.com` are to be replaced with the record type and domain of your choosing.
|
||||
|
||||
devDNS was inspired by [@1111Resolver](https://twitter.com/1111resolver), and the source is available on [GitHub](https://github.com/Ewpratten/devDNS).
|
||||
|
||||
## How it works
|
||||
The Python script behind devDNS is very simple. Every 10 seconds, it does the following (a rough sketch of the parsing and lookup step follows the list):
|
||||
- Fetch all new notifs
|
||||
- Find only mentions
|
||||
- Spin off a thread for each mention that passes a basic parser (Is the message 2 or 3 words long)
|
||||
- In the thread, check if the message is a control message (allows me to view the status of the bot via devRant)
|
||||
- Check if the request matches a required pattern
|
||||
- Call `dnspython` with requested record and domain
|
||||
- Receive answer from a custom [PIHole](https://pi-hole.net/) server with caching and super low latency
|
||||
- Send a comment with the results to the requester
|
||||
|
||||
That's it! Super simple, and only two days from concept to reality.
|
||||
|
||||
## Where is this hosted?
|
||||
This program is hosted on a Raspberry Pi lying in my room, running Docker. I also have [Portainer](https://www.portainer.io/) set up so I can easily monitor the bot from my phone over my VPN.
|
105
src/collections/_posts/2019-07-06-scrapingfrcgithub.md
Normal file
@@ -0,0 +1,105 @@
|
||||
---
|
||||
layout: default
|
||||
title: Scraping FRC team's GitHub accounts to gather large amounts of data
|
||||
description: There are a lot of teams...
|
||||
date: 2019-07-06
|
||||
tags: frc
|
||||
aliases:
|
||||
- /blog/2019/07/06/scrapingfrcgithub
|
||||
- /blog/scrapingfrcgithub
|
||||
---
|
||||
|
||||
I was curious about the most used languages for FRC, so I built a Python script to find out what they were.
|
||||
|
||||
## Some basic data
|
||||
Before we get to the heavy work done by my script, let's start with some general data.
|
||||
|
||||
Thanks to the [TBA API](https://www.thebluealliance.com/apidocs/v3), I know that there are 6917 registered teams. 492 of them have registered at least one account on GitHub.
|
||||
|
||||
## How the script works
|
||||
The script is split into steps:
|
||||
- Get a list of every registered team
|
||||
- Check for a github account attached to every registered team
|
||||
- If a team has an account, it is added to the dataset
|
||||
- Load each github profile
|
||||
- If it is a private account, move on
|
||||
- Use Regex to find all languages used
|
||||
- Compile data and sort
|
||||
|
||||
### Getting a list of accounts
|
||||
This is probably the simplest step in the whole process. I used the auto-generated [tbaapiv3client](https://github.com/TBA-API/tba-api-client-python) Python library's `get_teams_keys(key)` function, and kept incrementing `key` until I got an empty array. All returned data was then added together into a big list of team keys.
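The same pagination loop can be sketched with plain `requests` against the public `/teams/{page}/keys` endpoint, which is what the generated client wraps (the auth key below is a placeholder):

```python
import requests

TBA_KEY = "your-tba-auth-key"  # placeholder
BASE_URL = "https://www.thebluealliance.com/api/v3"

team_keys = []
page = 0
while True:
    # An empty page means every registered team has been seen
    keys = requests.get(
        f"{BASE_URL}/teams/{page}/keys",
        headers={"X-TBA-Auth-Key": TBA_KEY},
    ).json()
    if not keys:
        break
    team_keys.extend(keys)
    page += 1

print(f"Found {len(team_keys)} registered teams")
```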
|
||||
|
||||
### Checking for a team's github account
|
||||
The [TBA API](https://www.thebluealliance.com/apidocs/v3) helpfully provides a `/api/v3/team/<number>/social_media` endpoint that will give the GitHub username for any team you request (or nothing if they don't use GitHub).
|
||||
|
||||
A `for` loop on this with a list of every team number did the trick for finding accounts.
|
||||
|
||||
### Fetching language info
|
||||
To remove the need for an OAuth login to use the script, GitHub data is retrieved using standard HTTPS requests to the public profile pages instead of authenticated calls to the GitHub API. This gets around the tiny rate limit, but takes a bit longer to complete.
|
||||
|
||||
To check for language usage, a simple Regex pattern can be used: `/programmingLanguage"\>(.*)\</gm`
|
||||
|
||||
When combined with `re.findall()`, this pattern will return a list of all recent languages used by a team.
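In code, that step looks roughly like this (a simplified sketch; the exact profile URL format is an assumption, and GitHub's markup may change):

```python
import re

import requests

# Hypothetical example profile; any team's GitHub account works here
html = requests.get("https://github.com/frc5024").text

# The pattern from above, applied to the raw profile HTML
languages = re.findall(r'programmingLanguage"\>(.*)\<', html)
print(languages)
```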
|
||||
|
||||
|
||||
### Data saves / backup solution
|
||||
To deal with the fact that large amounts of data are being requested, and people might want to pause the script, I have created a system to allow for "savestates".
|
||||
|
||||
On launch of the script, it will check for a `./data.json` file. If this does not exist, one will be created. Otherwise, the contents will be read. This file contains both all the saved data, and some counters.
|
||||
|
||||
Each stage of the script contains a counter, and will increment the counter every time a team has been processed. This way, if the script is stopped and restarted, the parsers will just keep working from where they left off. This was very helpful when writing the script, as I needed to stop and start it every time I implemented a new feature.
|
||||
|
||||
All parsing data is saved to the JSON file every time the script completes, or when it is interrupted.
|
||||
|
||||
## What I learned
|
||||
After letting the script run for about an hour, I got a bunch of data from every registered team.
|
||||
|
||||
This data includes every project (both on and offseason) from each team, so teams that build t-shirt cannons using the CTRE HERO would have C# in their list of languages. Things like that.
|
||||
|
||||
Unsurprisingly, by far the most popular programming language is Java, with 3232 projects. These projects were all mostly, or entirely, written in Java. Next up, we have C++ with 725 projects, and Python with 468 projects.
|
||||
|
||||
After Java, C++, and Python, we start running into languages used for dashboards, design, lessons, and offseason projects. Before I get to everything else, here is the usage of the rest of the valid languages for FRC robots:
|
||||
- C (128)
|
||||
- LabView (153)
|
||||
- Kotlin (96)
|
||||
- Rust (4)
|
||||
|
||||
Now, the rest of the languages below Python:
|
||||
```
|
||||
295 occurrences of JavaScript
|
||||
153 occurrences of LabVIEW
|
||||
128 occurrences of C
|
||||
96 occurrences of Kotlin
|
||||
72 occurrences of Arduino
|
||||
71 occurrences of C#
|
||||
69 occurrences of CSS
|
||||
54 occurrences of PHP
|
||||
40 occurrences of Shell
|
||||
34 occurrences of Ruby
|
||||
16 occurrences of Swift
|
||||
16 occurrences of Jupyter Notebook
|
||||
15 occurrences of Scala
|
||||
12 occurrences of D
|
||||
12 occurrences of TypeScript
|
||||
9 occurrences of Dart
|
||||
8 occurrences of Processing
|
||||
7 occurrences of CoffeeScript
|
||||
6 occurrences of Go
|
||||
6 occurrences of Groovy
|
||||
6 occurrences of Objective-C
|
||||
4 occurrences of Rust
|
||||
3 occurrences of MATLAB
|
||||
3 occurrences of R
|
||||
1 occurrences of Visual Basic
|
||||
1 occurrences of Clojure
|
||||
1 occurrences of Cuda
|
||||
```
|
||||
|
||||
I have removed markup and shell languages from that list because most of them are probably auto-generated.
|
||||
|
||||
In terms of GitHub account names, 133 teams follow FRC convention and use a username starting with `frc` followed by their team number, 95 teams use `team` and then their number, and 264 teams use something else.
|
||||
|
||||
## Using the script
|
||||
This script is not on PYPI this time. You can obtain a copy from my GitHub repo: [https://github.com/Ewpratten/frc-code-stats](https://github.com/Ewpratten/frc-code-stats)
|
||||
|
||||
First, make sure both `python3.7` and `python3-pip` are installed on your computer. Next, delete the `data.json` file. Then, install the requirements with `pip3 install -r requirements.txt`. Finally, run with `python3 main.py` to start the script. Now, go outside and enjoy nature for about an hour, and your data should be loaded!
|
33
src/collections/_posts/2019-07-13-lookback-gmad.md
Normal file
@@ -0,0 +1,33 @@
|
||||
---
|
||||
layout: default
|
||||
title: Taking a look back at GMAD
|
||||
description: Fun, Simple, and Quick
|
||||
date: 2019-07-13
|
||||
tags:
|
||||
- project
|
||||
aliases:
|
||||
- /blog/2019/07/13/lookback-gmad
|
||||
- /blog/lookback-gmad
|
||||
---
|
||||
|
||||
One day, back in June of 2018, I was both looking for a new project to work on, and trying to decide which Linux distro to install on one of my computers. From this, a little project was born. [Give Me a Distro](http://ewpratten.retrylife.ca/GiveMeADistro/) (or, GMAD, as I like to call it) is a little website that chooses a random distribution of Linux and shows a description of what you are about to get yourself into, and a download link for the latest ISO.
|
||||
|
||||
## Backend tech
|
||||
This is one of the simplest projects I have ever made. All the backend does is:
|
||||
- Select a random number (n)
|
||||
- Fetch the nth item from a list of distros
|
||||
- Push the selected data to the user via DOM
|
||||
|
||||
## Frontend
|
||||
This website is just plain HTML and CSS3, built without any CSS framework.
|
||||
|
||||
## My regrets
|
||||
There are two things I do not like about this project. Firstly, on load, the site briefly suggests Arch Linux before flashing to the random selection. This is due to the fact that Arch is the default for people with JavaScript disabled. Some kind of loading animation would fix this.
|
||||
|
||||
Secondly, the version of the site hosted on [retrylife.ca](https://retrylife.ca/gmad) is actually just an iframe to [ewpratten.github.io](https://ewpratten.github.io/GiveMeADistro) due to some CNAME issues.
|
||||
|
||||
## Contributing
|
||||
If you would like to add a distro or three to the website, feel free to make a pull request over on [GitHub](https://github.com/Ewpratten/GiveMeADistro).
|
||||
|
||||
## Why make a post about it a year later?
|
||||
I just really enjoyed working with the project and sharing it with friends, so I figured I should mention it here too. Maybe it will inspire someone to make something cool!
|
133
src/collections/_posts/2019-07-15-mindmap.md
Normal file
@@ -0,0 +1,133 @@
|
||||
---
|
||||
layout: default
|
||||
title: Mind map generation with Python
|
||||
description: Step 1
|
||||
date: 2019-07-15
|
||||
aliases:
|
||||
- /blog/2019/07/15/mindmap
|
||||
- /blog/mindmap
|
||||
---
|
||||
|
||||
While working on an assignment with [Coggle](https://coggle.it) today, I noticed an interesting option in the save menu. *Download as .mm file*. Having rarely worked with mind maps before, and only doing it online, it never occurred to me that someone would have a file format for it. So I took a look.
|
||||
|
||||
## What is a .mm file?
|
||||
It turns out, a `.mm` file is just some XML describing the mind map. Here is a simple mind map:
|
||||
|
||||

|
||||
|
||||
And again as a `.mm` file:
|
||||
|
||||
```xml
|
||||
<map version="0.9.0">
|
||||
<node TEXT="Master Node" FOLDED="false" POSITION="right" ID="5d2d02b1a315dd0879f48c1c" X_COGGLE_POSX="0" X_COGGLE_POSY="0">
|
||||
<edge COLOR="#b4b4b4"/>
|
||||
<font NAME="Helvetica" SIZE="17"/>
|
||||
<node TEXT="Child branch" FOLDED="false" POSITION="right" ID="f72704969525d2a0333dd635">
|
||||
<edge COLOR="#7aa3e5"/>
|
||||
<font NAME="Helvetica" SIZE="15"/>
|
||||
<node TEXT="Children 1" FOLDED="false" POSITION="right" ID="c83826af506cae6e55761d5c">
|
||||
<edge COLOR="#7ea7e5"/>
|
||||
<font NAME="Helvetica" SIZE="13"/>
|
||||
</node>
|
||||
<node TEXT="Children 2" FOLDED="false" POSITION="right" ID="47723a4d0fb766863f70d204">
|
||||
<edge COLOR="#82aae7"/>
|
||||
<font NAME="Helvetica" SIZE="13"/>
|
||||
</node>
|
||||
</node>
|
||||
</node>
|
||||
</map>
|
||||
```
|
||||
|
||||
Neat, right?
|
||||
|
||||
## What can we do with it?
|
||||
I have not done much research about this because I wanted to work all of this out on my own. But I know one thing as a fact: working with XML sucks (especially in Python). I decided that this would be much better if I could load `.mm` files as JSON. This would allow easy manipulation and some cool projects.
|
||||
|
||||
## My script
|
||||
Like everything I do, I made a script to play with these files.
|
||||
|
||||
It's pretty simple. First, it loads a `.mm` file, then parses it into a `list` of `xml.etree.ElementTree.Element` objects.
|
||||
|
||||
```python
|
||||
import xml.etree.ElementTree as ET  # needed for ET.fromstring below

raw_mm = ""
|
||||
|
||||
with open(args.file, "r") as fp:
|
||||
raw_mm = fp.read()
|
||||
fp.close()
|
||||
|
||||
xml = ET.fromstring(raw_mm)
|
||||
```
|
||||
|
||||
The parsed `list` is then passed into a recursive function that constructs a `dict`
|
||||
|
||||
```python
|
||||
def xmlToDict(xml):
|
||||
output = []
|
||||
for elem in list(xml):
|
||||
|
||||
if "TEXT" not in elem.attrib:
|
||||
continue
|
||||
|
||||
name = elem.attrib['TEXT']
|
||||
json_element = {"name": name}
|
||||
|
||||
try:
|
||||
json_element["children"] = xmlToDict(elem)
|
||||
except:
|
||||
continue
|
||||
|
||||
# Detect node type
|
||||
if json_element["children"]:
|
||||
json_element["type"] = "branch"
|
||||
else:
|
||||
json_element["type"] = "leaf"
|
||||
del json_element["children"]
|
||||
|
||||
output.append(json_element)
|
||||
|
||||
return output
|
||||
```
|
||||
|
||||
Finally, the `dict` is written to a file with `json.dump`
|
||||
|
||||
```python
|
||||
json.dump(mind_map, open(args.file + ".json", "w"))
|
||||
```
|
||||
|
||||
The whole script (with comments) can be found on my [GitHub account](https://gist.github.com/Ewpratten/0d8f7c7371380c9ca8adcfc6502ccf84#file-parser-py).
|
||||
|
||||
## The output
|
||||
Running the `.mm` file from above through the script gives:
|
||||
|
||||
```json
|
||||
[
|
||||
{
|
||||
"name":"Master Node",
|
||||
"children":[
|
||||
{
|
||||
"name":"Child branch",
|
||||
"children":[
|
||||
{
|
||||
"name":"Children 1",
|
||||
"type":"leaf"
|
||||
},
|
||||
{
|
||||
"name":"Children 2",
|
||||
"type":"leaf"
|
||||
}
|
||||
],
|
||||
"type":"branch"
|
||||
}
|
||||
],
|
||||
"type":"branch"
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
## The next step
|
||||
This script just translates a `.mm` file to JSON. Nothing else. Next, I want to convert this to a library, and add a JSON to `.mm` function as well. This leads into my ultimate goal for this project.
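As a starting point for that reverse direction, a rough sketch of a JSON-to-`.mm` converter might look like this (not part of the current script; it only handles the fields the parser above produces):

```python
import xml.etree.ElementTree as ET

def dictToXml(nodes, parent):
    """Recursively rebuild <node> elements from the JSON structure above."""
    for item in nodes:
        elem = ET.SubElement(parent, "node", TEXT=item["name"], FOLDED="false")
        if item["type"] == "branch":
            dictToXml(item["children"], elem)

def jsonToMm(mind_map) -> str:
    root = ET.Element("map", version="0.9.0")
    dictToXml(mind_map, root)
    return ET.tostring(root, encoding="unicode")
```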
|
||||
|
||||
I want a script that I can drop in the root of any project to build a [Gource](https://gource.io/)-style visualization of the folder structure. This will give me a way to make cool visualizations for lessons on the robotics team. On top of the folder visualization, Coggle's new flowchart feature can be used to generate graphical representations of @frc5024's codebases. This could give me an interactive overview of the work being done by our team.
|
||||
|
||||
### Further learning
|
||||
crm.org has done a great writeup of [Coggle, and some of its features](https://crm.org/news/free-flowin-mind-maps-with-coggle). If you are looking to learn more about the tool, I recommend taking a few minutes to read their post.
|
45
src/collections/_posts/2019-08-10-why-i-carry-nfc.md
Normal file
@@ -0,0 +1,45 @@
|
||||
---
|
||||
layout: default
|
||||
title: My weird piece of EDC
|
||||
description: Reasons why I always carry NFC cards with me
|
||||
date: 2019-08-10
|
||||
tags:
|
||||
- random
|
||||
aliases:
|
||||
- /blog/2019/08/10/why-i-carry-nfc
|
||||
- /blog/why-i-carry-nfc
|
||||
---
|
||||
|
||||
I'm back with a quick little post about something I carry with me everywhere I go; EDC (Every-Day Carry), if you will.
|
||||
|
||||
## How this started
|
||||
Earlier this year, my friend @hyperliskdev showed me a piece of "fake ID" he was given as a joke. After some experimentation, he noticed that, upon tapping it to his phone, he would get an error message about an un-formatted card.
|
||||
|
||||
After hearing of this, I opened up [NFC Tools](https://play.google.com/store/apps/details?id=com.wakdev.nfctools.pro) on my phone and started playing. We had quite some fun with [various settings and data](#shenanigans), and I decided that I wanted a card too. I sent a message to someone I knew worked with these, and got myself four to play with.
|
||||
|
||||
## Shenanigans
|
||||
Upon figuring out how to write to @hyperliskdev's card, we started out simple. We sent bits of text to each other, and I eventually sent him a copy of my contact information and Bitcoin address. Then came the real fun...
|
||||
|
||||
By setting the data type to `external link`, and the content to [this totally not suspicious URL](https://www.youtube.com/watch?v=dQw4w9WgXcQ), we now had the perfect tool for derailing a lesson. An automatic [Rick Roll](https://en.wikipedia.org/wiki/Rickrolling) card. Upon tapping this card to a phone, the youtube app would auto-play *Rick Astley's Never Gonna Give You Up*. After this discovery, people started asking to buy pre-configured cards from me 😆.
|
||||
|
||||
After this came even more fun ideas:
|
||||
- Enabling flashlights
|
||||
- Rebooting phones
|
||||
- Calling phone numbers
|
||||
- Sending texts
|
||||
- Filling phones with fake contacts
|
||||
|
||||
## Practical use
|
||||
I don't actually carry my cards around for messing with people; instead, I use them for things like:
|
||||
- Cloning hotel access cards (being in a room of 4 with only 2 cards)
|
||||
- Creating login cards for school printers (so I don't have to log in manually)
|
||||
- Sharing small amounts of data and links between phones
|
||||
- Giving my contact info to people
|
||||
|
||||
Thanks to the NFC Tools app, pretty much everything is 3 taps and a swipe away. I strongly recommend picking up some cards for yourself if you work with a large number of NFC-compatible systems.
|
||||
|
||||
|
||||
## A/N
|
||||
Occasionally, I either have nothing in the works, or am working on some very boring and technical projects, so I look to post some fun content like this. Currently the latter of the options is true, and I wanted a quick break from writing networking code.
|
||||
|
||||
Let me know what you think of this type of content!
|
96
src/collections/_posts/2019-08-11-setting-up-ja.md
Normal file
@@ -0,0 +1,96 @@
|
||||
---
|
||||
layout: default
|
||||
title: How I set up ひらがな input on my laptop
|
||||
description: I3wm makes everything 10x harder than it should be
|
||||
date: 2019-08-12
|
||||
tags:
|
||||
- languages
|
||||
- walkthrough
|
||||
- linux
|
||||
aliases:
|
||||
- /blog/2019/08/12/setting-up-ja
|
||||
- /blog/setting-up-ja
|
||||
draft: false
|
||||
---
|
||||
|
||||
I am currently working with [Hiragana](https://en.wikipedia.org/wiki/Hiragana), [Katakana](https://en.wikipedia.org/wiki/Katakana), and, [Kanji](https://en.wikipedia.org/wiki/Kanji) in some projects, and needed a more reliable way to write than running some [romaji](https://en.wikipedia.org/wiki/Romanization_of_Japanese) through an online translator. So, this post will detail what I did to enable native inputs on my laptop. This guide is specifically for [i3wm](https://i3wm.org/), because it does not obey system settings for languages and inputs.
|
||||
|
||||
## Adding font support to Linux
|
||||
Firstly, we need fonts. Depending on your system, these may already be installed. For Japanese, I only used `vlgothic`, so here is the package for it:
|
||||
```
|
||||
sudo apt install fonts-vlgothic
|
||||
```
|
||||
|
||||
## Language support
|
||||
I'm not sure if this matters, but I have seen other people do it, so why not be safe?
|
||||
|
||||
I am currently running a stock Ubuntu [18.04](https://releases.ubuntu.com/18.04.5/) base, which means that everything is pre-configured for Gnome. To set language support in Gnome, pull up the settings panel:
|
||||
```bash
|
||||
# This line fixes some compatibility issues between
|
||||
# Gnome and I3 when launching the settings menu.
|
||||
# I recommend aliasing it.
|
||||
env XDG_CURRENT_DESKTOP=GNOME gnome-control-center
|
||||
```
|
||||
|
||||

|
||||
|
||||
Next, go to *Settings > Language and Region > Input Sources*, and click on *Manage Installed Languages*.
|
||||
This will bring up a window where you can select a new language to install. From here, I clicked on *Install / Remove Language*.
|
||||
|
||||

|
||||
|
||||
In this list, I just selected the languages I wanted (English and Japanese), and applied my changes. You may be asked to enter your password while installing the new languages. Once installation is complete, log out, and in again.
|
||||
|
||||
With the new language support installed, return to the *Input Sources* settings, and press the `+` button to add a new language. From here, search the language you want (it may be under *Other*) and select it. For Japanese, select the `mozc` variant.
|
||||
|
||||
Gnome's language settings are now configured. If you are using Gnome (not I3), you can stop here.
|
||||
|
||||
## Configuring ibus
|
||||
Don't get me wrong, I love I3wm, but sometimes its configurability drives me crazy.
|
||||
|
||||
After searching through various forums and wikis looking for an elegant way to switch languages in I3, I found a link to an [ArchWiki page](https://wiki.archlinux.org/index.php/IBus) at the bottom of a mailing list (I blame Google for not showing this sooner). It turns out that a program called `ibus` is exactly what I needed. Here is how to set it up:
|
||||
|
||||
Remember `mozc` from above? If you are not using it, this package may not work. Search for the appropriate `ibus-` package for your selected language(s).
|
||||
```bash
|
||||
# Install ibus-mozc for Japanese (mozc)
|
||||
sudo apt install ibus-mozc
|
||||
```
|
||||
|
||||
Now that `ibus` is installed, run the setup script:
|
||||
```bash
|
||||
ibus-setup
|
||||
```
|
||||
|
||||

|
||||
|
||||
From here, set your shortcut to something not used by I3 (I chose `CTRL+Shift+Space`, but most people prefer `Alt+Space`), and enable the system tray icon.
|
||||
Now, go to the *Input Method* settings.
|
||||
|
||||

|
||||
|
||||
From here, press the `+`, and add your language(s).
|
||||
|
||||
|
||||
## Configuring .profile
|
||||
According to the Wiki page, I needed to add the following to my `~/.profile`:
|
||||
```bash
|
||||
# Language support
|
||||
export GTK_IM_MODULE=ibus
|
||||
export XMODIFIERS=@im=ibus
|
||||
export QT_IM_MODULE=ibus
|
||||
ibus-daemon -d -x
|
||||
```
|
||||
|
||||
It turns out that this [causes issues with some browsers](https://github.com/ibus/ibus/issues/2020), so I actually put *this* in my `~/.profile` instead:
|
||||
```bash
|
||||
# Language support
|
||||
export GTK_IM_MODULE=xim
|
||||
export XMODIFIERS=@im=ibus
|
||||
export QT_IM_MODULE=xim
|
||||
ibus-daemon -drx
|
||||
```
|
||||
|
||||
Now, log out and back in to let ibus start properly, and there should be a new applet in your bar for language settings.
|
||||
|
||||
## Workflow
|
||||
`ibus` runs in the background and will show an indication of your selected language when you press the keyboard shortcut set in the [setup tool](#configuring-ibus). For languages like Japanese, whose writing systems do not use a Latin-based alphabet, `ibus` will automatically convert your words as you type (this behavior differs from language to language).
|
83
src/collections/_posts/2019-08-24-shift2.md
Normal file
@ -0,0 +1,83 @@
|
||||
---
|
||||
layout: default
|
||||
title: Keyed data encoding with Python
|
||||
description: XOR is pretty cool
|
||||
date: 2019-08-24
|
||||
tags:
|
||||
- project
|
||||
- python
|
||||
aliases:
|
||||
- /blog/2019/08/24/shift2
|
||||
- /blog/shift2
|
||||
---
|
||||
|
||||
I have always been interested in text and data encoding, so last year, I made my first encoding tool. [Shift64](https://github.com/Ewpratten/shift64) was designed to take plaintext data with a key, and convert it into a block of base64 that could, in theory, only be decoded with the original key. I had a lot of fun with this tool, and a very stripped down version of it actually ended up as a bonus question on the [5024 Programming Test](https://github.com/frc5024/Programming-Test/blob/master/test.md) for 2018/2019. Yes, the key was in fact `5024`.
|
||||
|
||||
This tool had some issues. Firstly, the code was a mess and only accepted hard-coded values. This made it very impractical as an everyday tool, and a nightmare to continue developing. Secondly, the encoder made use of entropy bits and self-modifying keys that would end up producing encoded files >1GB from just the word *hello*.
|
||||
|
||||
## Shift2
|
||||
One of the oldest items on my TODO list has been to rewrite shift64, so I made a brand new tool out of it. [Shift2](https://github.com/Ewpratten/shift) is both a command-line tool, and a Python3 library that can efficiently encode and decode text data with a single key (unlike shift64, which used two keys concatenated into a single string, and separated by a colon).
|
||||
|
||||
### How it works
|
||||
Shift2 has two inputs: a `file` and a `key`. These two strings are used to produce a single output, the `message`.
|
||||
|
||||
When encoding a file, shift2 starts by encoding the raw data with [base85](https://en.wikipedia.org/wiki/Ascii85), to ensure that all data being passed to the next stage can be represented as a UTF-8 string (even binary data). This base85 data is then XOR encrypted with a rotating key. This operation can be expressed with the following (this example ignores the base85 encoding steps):
|
||||
```python
|
||||
file = "Hello reader! I am some input that needs to be encoded"
|
||||
key = "ewpratten"
|
||||
|
||||
message = ""
|
||||
|
||||
for i, char in enumerate(file):
|
||||
message += chr(
|
||||
ord(char) ^ ord(key[i % len(key) - 1])
|
||||
)
|
||||
|
||||
```
|
||||
|
||||
The output of this contains non-displayable characters. A second base85 encoding is used to fix this. Running the example snippet above, then base85 encoding the `message` once results in:
|
||||
```
|
||||
CIA~89YF>W1PTBJQBo*W6$nli7#$Zu9U2uI5my8n002}A3jh-XQWYCi2Ma|K9uW=@5di
|
||||
```
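
To get from the raw XOR output to a printable string like the one above, the message is simply run through Base85 again. Here is a minimal sketch of that second pass using Python's standard library (shift2's exact Base85 variant and framing may differ, so the output will not necessarily match the example above byte-for-byte):

```python
import base64

# `message` is the XOR output from the snippet above
printable = base64.b85encode(message.encode("utf-8")).decode("ascii")
print(printable)
```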
|
||||
|
||||
If using the shift2 commandline tool, you would see a different output:
|
||||
```
|
||||
B2-is8Y&4!ED2H~Ix<~LOCfn@P;xLjM_E8(awt`1YC<SaOLbpaL^T!^W_ucF8Er~?NnC$>e0@WAWn2bqc6M1yP+DqF4M_kSCp0uA5h->H
|
||||
```
|
||||
|
||||
This is for a few reasons. Firstly, as mentioned above, shift2 uses base85 **twice**. Once before, and once after XOR encryption. Secondly, a file header is prepended to the output to help the decoder read the file. This header contains version info, the file length, and the encoding type.
|
||||
|
||||
### Try it yourself with PIP
|
||||
I have published shift2 on [pypi.org](https://pypi.org/project/shift-tool/) for use with PIP. To install shift2, ensure both `python3` and `python3-pip` are installed on your computer, then run:
|
||||
```sh
|
||||
# Install shift2
|
||||
pip3 install shift-tool
|
||||
|
||||
# View the help for shift2
|
||||
shift2 -h
|
||||
```
|
||||
|
||||
<div id="demo" markdown="1">
|
||||
|
||||
### Try it in the browser
|
||||
I have ported the core code from shift2 to [run in the browser](http://www.brython.info/index.html). This demo is entirely client-side, and may take a few seconds to load depending on your device.
|
||||
|
||||
<input type="radio" id="encode" name="shift-action" value="encode" checked>
|
||||
<label for="encode">Encode</label>
|
||||
<input type="radio" id="decode" name="shift-action" value="decode">
|
||||
<label for="decode">Decode</label>
|
||||
|
||||
<input type="text" id="key" name="key" placeholder="Encoding key" required><br>
|
||||
<input type="text" id="msg" name="msg" placeholder="Message" required size="30">
|
||||
|
||||
<button type="button" class="btn btn-primary" id="shift-button" disabled>shift2 demo is loading... (this may take a few seconds)</button>
|
||||
|
||||
</div>
|
||||
|
||||
### Future plans
|
||||
Since shift2 can also be used as a library (as outlined in the [README](https://github.com/Ewpratten/shift/blob/master/README.md)), I would like to write a program that allows users to talk to each other, IRC-style, over a TCP port. This program would use either a pre-shared or generated key to encode / decode messages on the fly.
|
||||
|
||||
If you are interested in helping out, or taking on this idea for yourself, send me an email.
|
||||
|
||||
<!-- Python code -->
|
||||
<script type="text/python" src="/assets/python/shift2/shift2demo.py"></script>
|
54
src/collections/_posts/2019-08-27-github-cleanup.md
Normal file
@ -0,0 +1,54 @@
|
||||
---
|
||||
layout: default
|
||||
title: I did some cleaning
|
||||
description: Spring cleaning is fun when it isn't spring, and a computer does all
|
||||
the work
|
||||
date: 2019-08-27
|
||||
tags:
|
||||
- random
|
||||
- github
|
||||
aliases:
|
||||
- /blog/2019/08/27/github-cleanup
|
||||
- /blog/github-cleanup
|
||||
---
|
||||
|
||||
As I am continuing to check items off my TODO list before school starts, I have come to an item I have been putting off for a while. **Clean up GitHub Account**. Luckily, I discovered a little trick to make the process of deleting unused repos a little easier!
|
||||
|
||||
## Getting a list of repos to delete
|
||||
I could have automated this, but I prefer a little control. To get the list, start by opening up a new Firefox window with a single tab. In this tab, open your GitHub profile to the list of repos.
|
||||
Starting from the top, scroll through, and middle click on anything you want to delete. This opens it in a new tab.
|
||||
|
||||
Once you have a bunch of tabs open with repos to remove, use [this Firefox plugin](https://addons.mozilla.org/en-US/firefox/addon/urls-list/) to create a plaintext list of every link you opened, and paste the list of links into VS-code.
|
||||
|
||||
## Getting an API token
|
||||
Next, an API token is needed. Go to GitHub's [token settings](https://github.com/settings/tokens), and generate a new one (make sure to enable repository deletion).
|
||||
|
||||
## "Parsing" the links
|
||||
With our new token and our VS Code file, we can start "parsing" the data.
|
||||
|
||||
Pressing `CTRL + F` brings up the Find/Search toolbar. In the text box, there are a few icons. Pressing the one farthest to the right will enable [Regex](https://en.wikipedia.org/wiki/Regular_expression) mode. With this set, paste the following:
|
||||
```
|
||||
https://github.com/
|
||||
```
|
||||
|
||||
Now, click the arrow on the left to enable *replace mode*, and put this in the new box:
|
||||
```
|
||||
curl -XDELETE -H 'Authorization: token <API token from above>' "https://api.github.com/repos/
|
||||
```
|
||||
|
||||
Then press *replace all*.
|
||||
|
||||
Finally, replace the contents of the first box with:
|
||||
```
|
||||
\n
|
||||
```
|
||||
|
||||
and the second with:
|
||||
```
|
||||
"\n
|
||||
```
|
||||
|
||||
and *replace all* again.
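
Assuming each original link looked like `https://github.com/<user>/<repo>` (placeholders mine), every line in the file should now read something like this:

```
curl -XDELETE -H 'Authorization: token <API token from above>' "https://api.github.com/repos/<user>/<repo>"
```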
|
||||
|
||||
## Deleting the repos
|
||||
Simply copy the entire text file that was made, paste it into a terminal, and press \<Enter\> (this will take a while).
|
67
src/collections/_posts/2019-09-11-buildingimgfrombin.md
Normal file
@ -0,0 +1,67 @@
|
||||
---
|
||||
layout: default
|
||||
title: Building images from binary data
|
||||
description: Simple, yet fun
|
||||
date: 2019-09-11
|
||||
tags:
|
||||
- python
|
||||
- images
|
||||
- project
|
||||
redirect_from:
|
||||
- /post/ef7b3166/
|
||||
- /ef7b3166/
|
||||
aliases:
|
||||
- /blog/2019/09/11/buildingimgfrombin
|
||||
- /blog/buildingimgfrombin
|
||||
extra:
|
||||
js_import:
|
||||
- https://platform.twitter.com/widgets.js
|
||||
|
||||
---
|
||||
|
||||
During a computer science class today, we were talking about embedding code and metadata in *jpg* and *bmp* files. @exvacuum was showing off a program he wrote that watched a directory for new image files, and would display them on a canvas. He then showed us a special image. In this image, he had injected some metadata into the last few pixels, which were not rendered, but told his program where to position the image on the canvas, and its size.
|
||||
|
||||
This demo got @hyperliskdev and I thinking about what else we can do with image data. After some talk, the idea of converting application binaries to images came up. I had seen a blog post about visually decoding [OOK data](https://en.wikipedia.org/wiki/On%E2%80%93off_keying) by converting an [IQ capture](http://www.ni.com/tutorial/4805/en/) to an image. With a little adaptation, I did the same for a few binaries on my laptop.
|
||||
|
||||
|
||||
<!-- Tweet embed -->
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">I present: "Parts of <a href="https://twitter.com/GIMP_Official?ref_src=twsrc%5Etfw">@GIMP_Official</a>'s binary, represented as a bitmap" <a href="https://t.co/iLljdE4nlK">pic.twitter.com/iLljdE4nlK</a></p>— Evan Pratten (@ewpratten) <a href="https://twitter.com/ewpratten/status/1171801959197794304?ref_src=twsrc%5Etfw">September 11, 2019</a></blockquote>
|
||||
|
||||
## Program design
|
||||
Like all of my ideas, I wrote some code to test this one out. Above is a small sample of the interesting designs found in the [gimp](https://www.gimp.org/) binary. The goals for this script were to:
|
||||
|
||||
- Accept any file of any type or size
|
||||
- Allow the user to select the file dimensions
|
||||
- Generate an image
|
||||
- Write the data in a common image format
|
||||
|
||||
If you would like to see how the code works, see the "*Check out the script*" section below.
|
||||
|
||||
## A note on data wrapping
|
||||
By using a [generator](https://wiki.python.org/moin/Generators), and the [range function](https://docs.python.org/3/library/functions.html#func-range)'s 3rd argument, any list can be easily split into a 2d list at a set interval.
|
||||
|
||||
```python
|
||||
def chunk(l, n):
    # l is a list of data; n is the desired row length
    for i in range(0, len(l), n):
        yield l[i:i + n]
|
||||
```
|
||||
|
||||
### Binaries have a habit of not being rectangular
|
||||
Unlike photos, binaries are not generated from rectangular image sensors, but instead from compilers and assemblers (and sometimes hand-written binary). These do not generate perfect rectangles. Due to this, my script simply removes the last line from the image to "reshape" it. I may end up adding a small piece of code to pad the final line instead of stripping it in the future.
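
If I do add padding, it would look something like this minimal sketch (assuming `rows` is the 2D list produced by the generator above, and `width` is the chosen image width):

```python
# Pad the final row with zero-value (black) pixels instead of dropping it
rows[-1] = rows[-1] + [0] * (width - len(rows[-1]))
```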
|
||||
|
||||
## Other file types
|
||||
I also looked at other file types. Binaries are very interesting because they follow very strict ordering rules. I was hoping that a `wav` file would do something similar, but that does not appear to be the case. This is the most interesting pattern I could find in a `wav` file:
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Following up my previous post with a tiny segment of an audio file. This one is little less interesting <a href="https://t.co/u9EFloxnK5">pic.twitter.com/u9EFloxnK5</a></p>— Evan Pratten (@ewpratten) <a href="https://twitter.com/ewpratten/status/1171883910827040774?ref_src=twsrc%5Etfw">September 11, 2019</a></blockquote>
|
||||
|
||||
Back to executable data, these are small segments of a `dll` file:
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
## Check out the script
|
||||
This script is hosted [on my GitHub account](https://github.com/Ewpratten/binmap) as a standalone file (a rough sketch of the core idea is included after this list). Any version of python3 should work, but the following libraries are needed:
|
||||
|
||||
- Pillow
|
||||
- Numpy
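
The core idea boils down to something like this rough sketch (my own reimplementation for illustration, not the actual binmap script):

```python
import numpy as np
from PIL import Image


def binary_to_image(path: str, width: int, out_path: str) -> None:
    # Read the raw bytes of the input file
    data = np.fromfile(path, dtype=np.uint8)

    # Drop the partial final row so the data forms a rectangle
    height = len(data) // width
    pixels = data[: height * width].reshape((height, width))

    # Treat each byte as a grayscale pixel and save the result
    Image.fromarray(pixels, mode="L").save(out_path)


# Example usage (the input path is just an illustration)
binary_to_image("/usr/bin/gimp", 512, "gimp.png")
```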
|
97
src/collections/_posts/2019-10-05-billwurtz.md
Normal file
@ -0,0 +1,97 @@
|
||||
---
|
||||
layout: default
|
||||
title: Using an RNN to generate Bill Wurtz notes
|
||||
description: Textgenrnn is fun
|
||||
date: 2019-10-05
|
||||
tags:
|
||||
- project
|
||||
- walkthrough
|
||||
- python
|
||||
redirect_from:
|
||||
- /post/99g9j2r90/
|
||||
- /99g9j2r90/
|
||||
aliases:
|
||||
- /blog/2019/10/05/billwurtz
|
||||
- /blog/billwurtz
|
||||
---
|
||||
|
||||
[Bill Wurtz](https://billwurtz.com/) is an American musician who became [reasonably famous](https://socialblade.com/youtube/user/billwurtz/realtime) through short musical videos posted to Vine and YouTube. I was searching through his website the other day, and stumbled upon a page labeled [*notebook*](https://billwurtz.com/notebook.html), and thought I should check it out.
|
||||
|
||||
Bill's notebook is a large (about 580 posts) collection of random thoughts, ideas, and sometimes just collections of words. A prime source of entertainment, and of neural network inputs.
|
||||
|
||||
> *"If you are looking to burn something, fire may be just the ticket"* - Bill Wurtz
|
||||
|
||||
## Choosing the right tool for the job
|
||||
If you haven't noticed yet, I'm building a neural net to generate notes based on his writing style and content. Anyone who has read [my first post](@/blog/2018-06-27-BecomeRanter.md) will know that I have already done a similar project in the past. This means *time to reuse some code*!
|
||||
|
||||
For this project, I decided to use an amazing library by @minimaxir called [textgenrnn](https://github.com/minimaxir/textgenrnn). This Python library will handle all of the heavy (and light) work of training an RNN on a text dataset, then generating new text.
|
||||
|
||||
## Building a dataset
|
||||
This project was a joke, so I didn't bother with properly grabbing each post, categorizing them, and parsing them. Instead, I built a little script to pull every HTML file from Bill's website, and regex out the body. This ended up leaving some artifacts in the output, but I don't really mind.
|
||||
|
||||
```python
|
||||
import re
|
||||
import requests
|
||||
|
||||
|
||||
def loadAllUrls():
|
||||
page = requests.get("https://billwurtz.com/notebook.html").text
|
||||
|
||||
links = re.findall(r"HREF=\"(.*)\"style", page)
|
||||
|
||||
return links
|
||||
|
||||
|
||||
def dumpEach(urls):
|
||||
for url in urls:
|
||||
page = requests.get(f"https://billwurtz.com/{url}").text.strip().replace(
|
||||
"</br>", "").replace("<br>", "").replace("\n", " ")
|
||||
|
||||
data = re.findall(r"</head>(.*)", page, re.MULTILINE)
|
||||
|
||||
# ensure data
|
||||
if len(data) == 0:
|
||||
continue
|
||||
|
||||
print(data[0])
|
||||
|
||||
|
||||
urls = loadAllUrls()
|
||||
print(f"Loaded {len(urls)} pages")
|
||||
dumpEach(urls)
|
||||
|
||||
```
|
||||
|
||||
This script will print each of Bill's notes to the console (each on its own line). I used a simple redirect to write this to a file.
|
||||
|
||||
```sh
|
||||
python3 scrape.py > posts.txt
|
||||
```
|
||||
|
||||
## Training
|
||||
To train the RNN, I just used some of textgenrnn's example code to read the posts file, and build an [HDF5](https://en.wikipedia.org/wiki/Hierarchical_Data_Format) file to store the RNN's weights.
|
||||
|
||||
```python
|
||||
from textgenrnn import textgenrnn
|
||||
|
||||
generator = textgenrnn()
|
||||
generator.train_from_file("/path/to/posts.txt", num_epochs=100)
|
||||
```
|
||||
|
||||
This takes quite a while to run, so I offloaded it to a [Droplet](https://www.digitalocean.com/products/droplets/), and left it running overnight.
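
Once training finishes, generating new notes only takes a couple more lines. A rough sketch (assuming textgenrnn's default `textgenrnn_weights.hdf5` output filename; adjust the path to wherever your weights ended up):

```python
from textgenrnn import textgenrnn

# Load the trained weights and print a few generated notes
generator = textgenrnn(weights_path="textgenrnn_weights.hdf5")
generator.generate(5, temperature=0.5)
```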
|
||||
|
||||
## The results
|
||||
Here are some of my favorite generated notes:
|
||||
|
||||
> *"note: do not feel better"*
|
||||
|
||||
> *"hi I am a car."*
|
||||
|
||||
> *"i am stuff and think about this before . this is it, the pond. how do they make me feel better?"*
|
||||
|
||||
> *"i am still about the floor"*
|
||||
|
||||
Not perfect, but it is readable English, so I call it a win!
|
||||
|
||||
## Play with the code
|
||||
I have uploaded the basic code, the scraped posts, and a partial hdf5 file [to GitHub](https://github.com/Ewpratten/be-bill) for anyone to play with. Maybe make a twitter bot out of this?
|
188
src/collections/_posts/2019-11-18-realtime-robot-code.md
Normal file
@ -0,0 +1,188 @@
|
||||
---
|
||||
layout: default
|
||||
title: Programming a live robot
|
||||
description: Living on the edge is an understatement
|
||||
date: 2019-11-20
|
||||
tags: random frc
|
||||
redirect_from:
|
||||
- /post/e9gdhj90/
|
||||
- /e9gdhj90/
|
||||
aliases:
|
||||
- /blog/2019/11/20/realtime-robot-code
|
||||
- /blog/realtime-robot-code
|
||||
---
|
||||
|
||||
> *"So.. what if we could skip asking for driver inputs, and just have the robot operators control the bot through a commandline interface?"*
|
||||
|
||||
This is exactly the kind of question I randomly ask while sitting in the middle of class, staring at my laptop. So, here is a post about my real-time programming adventure!
|
||||
|
||||
## Getting started
|
||||
|
||||
To get started, I needed a few things. Firstly, I have a laptop running Linux. This allows me to use [SSH](https://en.wikipedia.org/wiki/Secure_Shell) and [SCP](https://en.wikipedia.org/wiki/Secure_copy). There are Windows versions of both of these programs, but I find the "linux experience" easier to use. Secondly, I have grabbed one of [5024](https://www.thebluealliance.com/team/5024)'s [robots](https://cs.5024.ca/webdocs/docs/robots) to be subjected to my experiment. The components I care about are:
|
||||
|
||||
- A RoboRIO running 2019v12 firmware
|
||||
- 2 [TalonSRX](https://www.ctr-electronics.com/talon-srx.html) motor controllers
|
||||
- An FRC router
|
||||
|
||||
Most importantly, the RoboRIO has [RobotPy](https://robotpy.readthedocs.io/en/stable/install/robot.html#install-robotpy) and the [CTRE Libraries](https://robotpy.readthedocs.io/en/stable/install/ctre.html) installed.
|
||||
|
||||
### SSH connection
|
||||
|
||||
To get some code running on the robot, we must first connect to it via SSH. Depending on your connection to the RoboRIO, this step may be different. Generally, the following command will work just fine to connect (assuming your computer has an [mDNS](https://en.wikipedia.org/wiki/Multicast_DNS) service):
|
||||
|
||||
```sh
|
||||
ssh admin@roborio-<team>-frc.local
|
||||
```
|
||||
|
||||
If you have issues, try one of the following addresses instead:
|
||||
|
||||
```
|
||||
roborio-<team>-FRC
|
||||
roborio-<team>-FRC.lan
|
||||
roborio-<team>-FRC.frc-field.local
|
||||
10.TE.AM.2
|
||||
172.22.11.2 # Only works on a USB connection
|
||||
```
|
||||
|
||||
If you are asked for a password, and have not set one, press <kbd>Enter</kbd> 3 times (don't ask why; this just works).
|
||||
|
||||
## REPL-based control
|
||||
|
||||
If you have seen my work before, you'll know that I use Python for basically everything. This project is no exception. Conveniently, the RoboRIO is a linux-based device, and can run a Python3 [REPL](https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop). This allows real-time robot programming using a REPL via SSH.
|
||||
|
||||
WPILib requires a robot class to act as a "callback" for robot actions. My idea was to build a special robot class with static methods to allow me to start it, then use the REPL to interact with some control methods (like `setSpeed` and `stop`).
|
||||
|
||||
After connecting to the robot via SSH, a Python REPL can be started by running `python3`. If there is already robot code running, it will be automatically killed in the next step.
|
||||
|
||||
With Python running, we will need two libraries imported: `wpilib` and `ctre`. When importing `wpilib`, a message may appear notifying you that the old robot code has been stopped.
|
||||
|
||||
```python
|
||||
>>> import wpilib
|
||||
Killing previously running FRC program...
|
||||
FRC pid 1445 did not die within 0ms. Force killing with kill -9
|
||||
>>> import ctre
|
||||
```
|
||||
Keep in mind, this is a REPL. Lines that start with `>>>` or `...` are *user input*. Everything else is produced by code.
|
||||
|
||||
Next, we need to write a little code to get the robot operational. To save time, I wrote this "library" to do most of the work for me. Just save this as `rtrbt.py` somewhere, then use SCP to copy it to `/home/lvuser/rtrbt.py`.
|
||||
|
||||
```python
|
||||
# RealTime FRC Robot control helper
|
||||
# By: Evan Pratten <ewpratten>
|
||||
|
||||
# Import normal robot stuff
|
||||
import wpilib
|
||||
import ctre
|
||||
|
||||
# Handle WPI trickery
|
||||
try:
|
||||
from unittest.mock import patch
|
||||
except ImportError:
|
||||
from mock import patch
|
||||
import sys
|
||||
from threading import Thread
|
||||
|
||||
|
||||
## Internal methods ##
|
||||
_controllers = []
|
||||
_thread: Thread
|
||||
|
||||
|
||||
class _RTRobot(wpilib.TimedRobot):
|
||||
|
||||
def robotInit(self):
|
||||
|
||||
# Create motors
|
||||
_controllers.append(ctre.WPI_TalonSRX(1))
|
||||
_controllers.append(ctre.WPI_TalonSRX(2))
|
||||
|
||||
# Set safe modes
|
||||
_controllers[0].setSafetyEnabled(False)
|
||||
_controllers[1].setSafetyEnabled(False)
|
||||
|
||||
|
||||
|
||||
def _start():
|
||||
# Handle fake args
|
||||
args = ["run", "run"]
|
||||
with patch.object(sys, "argv", args):
|
||||
print(sys.argv)
|
||||
wpilib.run(_RTRobot)
|
||||
|
||||
## Utils ##
|
||||
|
||||
|
||||
def startRobot():
|
||||
""" Start the robot code """
|
||||
global _thread
|
||||
_thread = Thread(target=_start)
|
||||
_thread.start()
|
||||
|
||||
|
||||
def setMotor(id, speed):
|
||||
""" Set a motor speed """
|
||||
_controllers[id].set(speed)
|
||||
|
||||
def arcadeDrive(speed, rotation):
|
||||
""" Control the robot with arcade inputs """
|
||||
|
||||
l = speed + rotation
|
||||
r = speed - rotation
|
||||
|
||||
setMotor(0, l)
|
||||
setMotor(1, r)
|
||||
```
|
||||
|
||||
The idea is to create a simple robot program with global hooks into the motor controllers. Python's mocking tools are used to fake command-line arguments to trick RobotPy into thinking this script is being run via the RIO's robotCommand.
|
||||
|
||||
Once this script has been placed on the robot, SSH back in as `lvuser` (not `admin`), and run `python3`. If using `rtrbt.py`, the imports mentioned above are handled for you. To start the robot, just run the following:
|
||||
|
||||
```python
|
||||
>>> from rtrbt import *
|
||||
>>> startRobot()
|
||||
```
|
||||
|
||||
WPILib will dump some logs into the terminal (and probably some spam) from its own thread. Don't worry if you can't see the REPL prompt. It's probably just hidden due to the use of multiple threads in the same shell. Pressing <kbd>Enter</kbd> should show it again.
|
||||
|
||||
I added 2 functions for controlling motors. The first, `setMotor`, will set either the left (0), or right (1) motor to the specified speed. `arcadeDrive` will allow you to specify a speed and rotational force for the robot's drivetrain.
|
||||
|
||||
To kill the code and exit, press <kbd>CTRL</kbd> + <kbd>D</kbd> then <kbd>CTRL</kbd> + <kbd>C</kbd>.
|
||||
|
||||
Here is an example where I start the bot, then tell it to drive forward, then kill the left motor:
|
||||
```python
|
||||
Python 3.6.8 (default, Oct 7 2019, 12:59:55)
|
||||
[GCC 8.3.0] on linux
|
||||
Type "help", "copyright", "credits" or "license" for more information.
|
||||
>>> from rtrbt import *
|
||||
>>> startRobot()
|
||||
['run', 'run']
|
||||
17:53:46:472 INFO : wpilib : WPILib version 2019.2.3
|
||||
17:53:46:473 INFO : wpilib : HAL base version 2019.2.3;
|
||||
17:53:46:473 INFO : wpilib : robotpy-ctre version 2019.3.2
|
||||
17:53:46:473 INFO : wpilib : robotpy-cscore version 2019.1.0
|
||||
17:53:46:473 INFO : faulthandler : registered SIGUSR2 for PID 5447
|
||||
17:53:46:474 INFO : nt : NetworkTables initialized in server mode
|
||||
17:53:46:497 INFO : robot : Default IterativeRobot.disabledInit() method... Override me!
|
||||
17:53:46:498 INFO : robot : Default IterativeRobot.disabledPeriodic() method... Override me!
|
||||
17:53:46:498 INFO : robot : Default IterativeRobot.robotPeriodic() method... Override me!
|
||||
>>>
|
||||
>>> arcadeDrive(1.0, 0.0)
|
||||
>>> setMotor(0, 0.0)
|
||||
>>>
|
||||
^C
|
||||
Exception ignored in: <module 'threading' from '/usr/lib/python3.6/threading.py'>
|
||||
Traceback (most recent call last):
|
||||
File "/usr/lib/python3.6/threading.py", line 1294, in _shutdown
|
||||
t.join()
|
||||
File "/usr/lib/python3.6/threading.py", line 1056, in join
|
||||
self._wait_for_tstate_lock()
|
||||
File "/usr/lib/python3.6/threading.py", line 1072, in _wait_for_tstate_lock
|
||||
elif lock.acquire(block, timeout):
|
||||
KeyboardInterrupt
|
||||
```
|
||||
|
||||
The message at the end occurs when killing the code.
|
||||
|
||||
## Conclusion
|
||||
|
||||
I have no idea why any of this would be useful, or if it is even field legal. It's just a fun project for a Monday morning.
|
58
src/collections/_posts/2019-12-11-cron.md
Normal file
@ -0,0 +1,58 @@
|
||||
---
|
||||
layout: default
|
||||
title: I used cron for the first time
|
||||
description: And I didn't die
|
||||
date: 2019-12-11
|
||||
tags: random
|
||||
redirect_from:
|
||||
- /post/cd9dj84kf0/
|
||||
- /cd9dj84kf0/
|
||||
aliases:
|
||||
- /blog/2019/12/11/cron
|
||||
- /blog/cron
|
||||
---
|
||||
|
||||
[Cron](https://en.wikipedia.org/wiki/Cron) has always been one of those "scary sysadmin things" in my head. But today, I finally used it!
|
||||
|
||||
## My need
|
||||
I have access to a private API that happens to clear its users if they are inactive for too long. To solve this, I decided to add a small cron job to make an API call once per month. Basically a [keepalive](https://en.wikipedia.org/wiki/Keepalive).
|
||||
|
||||
## How I set it up
|
||||
|
||||
Adding a cron job to my laptop was very easy. First, I made a bash script for my API call (not needed, but I felt like doing it).
|
||||
|
||||
```sh
|
||||
#! /bin/bash
|
||||
curl --include --header "Accept: application/xml" '<API Endpoint Here>' --user $1:$2
|
||||
```
|
||||
|
||||
Then, by running `crontab -e` in my terminal, I just added a new line at the bottom of the file, describing the task and when it should be run.
|
||||
```cron
|
||||
# Edit this file to introduce tasks to be run by cron.
|
||||
#
|
||||
# Each task to run has to be defined through a single line
|
||||
# indicating with different fields when the task will be run
|
||||
# and what command to run for the task
|
||||
#
|
||||
# To define the time you can provide concrete values for
|
||||
# minute (m), hour (h), day of month (dom), month (mon),
|
||||
# and day of week (dow) or use '*' in these fields (for 'any').#
|
||||
# Notice that tasks will be started based on the cron's system
|
||||
# daemon's notion of time and timezones.
|
||||
#
|
||||
# Output of the crontab jobs (including errors) is sent through
|
||||
# email to the user the crontab file belongs to (unless redirected).
|
||||
#
|
||||
# For example, you can run a backup of all your user accounts
|
||||
# at 5 a.m every week with:
|
||||
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
|
||||
#
|
||||
# For more information see the manual pages of crontab(5) and cron(8)
|
||||
#
|
||||
# m h dom mon dow command
|
||||
00 11 1 * * /usr/local/bin/api-keepalive.sh <username> <password>
|
||||
```
|
||||
|
||||
This will run once per month, on the 1st, at 11:00.
|
||||
|
||||
That's it! Stupidly simple, and I am no longer scared of cron.
|
58
src/collections/_posts/2020-01-20-brainfuckinbash.md
Normal file
@ -0,0 +1,58 @@
|
||||
---
|
||||
layout: default
|
||||
title: Compiling BrainFuck with a shell script
|
||||
description: That was easy
|
||||
date: 2020-01-20
|
||||
tags:
|
||||
- random
|
||||
- bash
|
||||
redirect_from:
|
||||
- /post/es3v140d/
|
||||
- /es3v140d/
|
||||
aliases:
|
||||
- /blog/2020/01/20/brainfuckinbash
|
||||
- /blog/brainfuckinbash
|
||||
---
|
||||
|
||||
[BrainFuck](https://en.wikipedia.org/wiki/Brainfuck) is an [esoteric programming language](https://en.wikipedia.org/wiki/Esoteric_programming_language) that is surprisingly easy to implement. It is almost on the same level as "Hello, world!", but for compilers and interpreters. In this post, I'll share the new little BrainFuck compiler I built with a bash script.
|
||||
|
||||
## The BrainFuck instruction set
|
||||
|
||||
BrainFuck has 8 simple instructions:
|
||||
|
||||
| Instruction | Operation |
|
||||
|-------------|---------------------------------------------------------|
|
||||
| `>` | increment data pointer |
|
||||
| `<` | decrement data pointer |
|
||||
| `+` | increment the byte at the data pointer |
|
||||
| `-` | decrement the byte at the data pointer |
|
||||
| `.` | print the current byte to stdout |
|
||||
| `,` | read one byte from stdin to the current byte |
|
||||
| `[` | jump to the matching `]` if the current byte is 0 |
|
||||
| `]` | jump to the matching `[` if the current byte is nonzero |
|
||||
|
||||
### The C equivalent
|
||||
|
||||
BrainFuck works on a "tape". This is essentially a massive array, with a pointer that moves around. Luckily, this can be implemented with a tiny bit of C. (Thanks [wikipedia](https://en.wikipedia.org/wiki/Brainfuck#Commands))
|
||||
|
||||
| BF | C code |
|
||||
|-----|-------------------|
|
||||
| `>` | `++ptr;` |
|
||||
| `<` | `--ptr;` |
|
||||
| `+` | `++*ptr;` |
|
||||
| `-` | `--*ptr;` |
|
||||
| `.` | `putchar(*ptr);` |
|
||||
| `,` | `*ptr=getchar();` |
|
||||
| `[` | `while (*ptr) {` |
|
||||
| `]` | `}` |
|
||||
|
||||
## Implementation
|
||||
|
||||
Since BF has a direct conversion to C, I figured: *"Why not just use [sed](https://www.gnu.org/software/sed/manual/sed.html) to make a BF compiler?"* And so I did.
|
||||
|
||||
The script is available at [git.io/JvIHm](https://git.io/JvIHm), and works as follows (a rough sketch is shown after the list):
|
||||
|
||||
1. Create a file with a "header" of C code that includes `stdio.h` and creates a char array
|
||||
2. Use SED to replace all BF instructions with the matching C code
|
||||
3. Append a file footer with code to return the current value at the program pointer
|
||||
4. Compile this C file with [GCC](https://gcc.gnu.org/)
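
Here is a rough sketch of the same approach (my own reconstruction for illustration, not the exact script from the repo):

```bash
#!/bin/bash
# Rough sketch: compile a BrainFuck file to a native binary via C
# Usage: ./bfc.sh program.bf program

BF_FILE="$1"
OUT="$2"
C_FILE="$OUT.c"

# 1. Header: include stdio.h and create the tape
cat > "$C_FILE" << 'EOF'
#include <stdio.h>
char tape[30000];
char *ptr = tape;
int main(void) {
EOF

# 2. Translate BF instructions to C. The substitution order matters so that
#    emitted C code is not re-substituted by a later expression.
sed -e 's/[^][+<>.,-]//g' \
    -e 's/+/++*ptr;/g' \
    -e 's/-/--*ptr;/g' \
    -e 's/>/++ptr;/g' \
    -e 's/</--ptr;/g' \
    -e 's/\./putchar(*ptr);/g' \
    -e 's/,/*ptr=getchar();/g' \
    -e 's/\[/while (*ptr) {/g' \
    -e 's/]/}/g' \
    "$BF_FILE" >> "$C_FILE"

# 3. Footer: return the current value at the program pointer
echo "return *ptr; }" >> "$C_FILE"

# 4. Compile the generated C with GCC
gcc "$C_FILE" -o "$OUT"
```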
|
126
src/collections/_posts/2020-04-20-ludumdare46.md
Normal file
@ -0,0 +1,126 @@
|
||||
---
|
||||
layout: default
|
||||
title: 'Ludum Dare 46: Jamming with friends'
|
||||
description: Recapping the development of *Micromanaged Mike*
|
||||
date: 2020-04-20
|
||||
tags:
|
||||
- gamejam
|
||||
- ldjam
|
||||
- javascript
|
||||
- project
|
||||
redirect_from:
|
||||
- /post/ebsdjtd9/
|
||||
- /ebsdjtd9/
|
||||
aliases:
|
||||
- /blog/2020/04/20/ludumdare46
|
||||
- /blog/ludumdare46
|
||||
extra:
|
||||
js_import:
|
||||
- https://platform.twitter.com/widgets.js
|
||||
excerpt: A look back at the development of Micromanaged Mike
|
||||
---
|
||||
|
||||
Over the past weekend I teamed up with @rsninja722, @wm-c, @exvacuum, @marshmarlow, and our friends Sally and Matt to participate in the [LudumDare46](https://ldjam.com/events/ludum-dare/46) game jam. This post will outline the game development process.
|
||||
|
||||
## Day 0
|
||||
|
||||
----
|
||||
|
||||
Starting at 20:30 Friday night, we all anxiously awaited this jam's theme to be released.
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">The theme for Ludum Dare 46 is...<br><br>Keep it alive<a href="https://t.co/APmeEhwjEp">https://t.co/APmeEhwjEp</a> <a href="https://twitter.com/hashtag/LDJAM?src=hash&ref_src=twsrc%5Etfw">#LDJAM</a> <a href="https://t.co/bzNYi2zlDG">pic.twitter.com/bzNYi2zlDG</a></p>— Ludum Dare (@ludumdare) <a href="https://twitter.com/ludumdare/status/1251314489934446593?ref_src=twsrc%5Etfw">April 18, 2020</a></blockquote>
|
||||
|
||||
..and so we started.
|
||||
|
||||
Day 0 was spent on three tasks:
|
||||
- Deciding the story for our game
|
||||
- Allocating tasks
|
||||
- Building a software framework for the game
|
||||
|
||||
We decided to program our game in JavaScript (but not without an argument about types) because that is @rsninja722's primary language, and we can use his JS game engine, [game.js](https://github.com/rsninja722/game.js). On top of that, we also decided to use [SASS](https://sass-lang.com/) for styling, and I designed [a CSS injector](https://github.com/rsninja722/LudumDare46/blob/master/docs/assets/js/injection/cssinjector.js) that allows us to share variables between JS and SASS.
|
||||
|
||||
After task allocation, I took on the job of handling sounds and sound loading for the game. I decided to start work on that during day 1, due to homework.
|
||||
|
||||
*The game's progress at the end of Day 0 can be found at commit [0b4a1cd](https://github.com/rsninja722/LudumDare46/tree/0b4a1cdb92e62ff0f9453f6f169f641dd82e8f09)*
|
||||
|
||||
|
||||
## Day 1
|
||||
|
||||
----
|
||||
|
||||
Day 1 started with @exvacuum developing a heartrate monitor system for the game:
|
||||
|
||||

|
||||
|
||||
*Demo image showing off his algorithm*
|
||||
|
||||
His progress was documented [on his YouTube channel](https://www.youtube.com/watch?v=oqcbO8x0evY).
|
||||
|
||||
I also started out by writing a sound system that uses audio channels to separate sounds. This system pre-caches all sounds while the game loads. Unfortunately, after getting my branch merged into master, I noticed a few bugs:
|
||||
- When queueing audio, the 2 most recent requests are always ignored
|
||||
- Some browsers do not allow me to play multiple audio streams at the same time
|
||||
|
||||
Due to these issues, I decided to rewrite the audio backend to use [Howler.js](https://howlerjs.com/). I streamed this rewrite [on Twitch](https://www.twitch.tv/videos/595864066). The Howler rewrite was very painless, and made a much nicer interface for playing audio assets.
|
||||
|
||||
```javascript
|
||||
// The old way
|
||||
globalSoundContext.playSound(globalSoundContext.channels.bgm, soundAssets.debug_ding);
|
||||
|
||||
// The new way
|
||||
soundAssets.debug_ding.play();
|
||||
```
|
||||
|
||||
This rewrite also added integration with the volume control sliders in the game settings menu:
|
||||
|
||||

|
||||
|
||||
*Audio Settings screen*
|
||||
|
||||
Later on in the day, a basic HUD was designed to incorporate the game elements. A bug was also discovered that causes Firefox-based clients to not render the background fill. We decided to replace the background fill with an image later.
|
||||
|
||||

|
||||
|
||||
*V1 of the game HUD*
|
||||
|
||||
While developing the sound backend and tweaking UI, I added sound assets for heartbeats and footsteps. World assets were also added, and the walking system was improved.
|
||||
|
||||

|
||||
|
||||
*The game with basic world assets loaded*
|
||||
|
||||
@wm-c and @rsninja722 also spent time developing the game's tutorial mode.
|
||||
|
||||
*The game's progress at the end of Day 1 can be found at commit [84d8438](https://github.com/rsninja722/LudumDare46/tree/84d843880f052fd274d2d14036220e6b591e9ec3)*
|
||||
|
||||
## Day 2 & 3
|
||||
|
||||
----
|
||||
|
||||
|
||||
Day 2 started with a new background asset, and a new HUD design:
|
||||
|
||||

|
||||
|
||||
*The game's new background*
|
||||
|
||||

|
||||
|
||||
*The game's new HUD*
|
||||
|
||||
@rsninja722 also got to work on updating the game's collisions based on the new assets, while I added more sounds to the game (again, streaming this process [on Twitch](https://www.twitch.tv/videos/596589171)).
|
||||
|
||||
From then on, development time was just spent tweaking things such as:
|
||||
- A Chrome sound bug
|
||||
- A transition bug when moving from the loading screen to tutorial
|
||||
- Some collision bugs
|
||||
- Adding a new credits screen
|
||||
|
||||
*The game's progress at the end of Day 2 can be found at commit [b9d758f](https://github.com/rsninja722/LudumDare46/tree/b9d758f4172f2ca251da6f60af713888ef28b5fe)*
|
||||
|
||||
## The Game
|
||||
|
||||
Micromanaged Mike is free to play on [@rsninja722's website](https://rsninja.dev/LudumDare46/).
|
||||
|
||||

|
||||
|
||||
*Final game screenshot*
|
203
src/collections/_posts/2020-05-19-running-roborio-native.md
Normal file
@ -0,0 +1,203 @@
|
||||
---
|
||||
layout: default
|
||||
title: Running RoboRIO firmware inside Docker
|
||||
description: Containerized native ARMv7l emulation in 20 minutes
|
||||
date: 2020-05-19
|
||||
tags: frc roborio emulation
|
||||
redirect_from:
|
||||
- /post/5d3nd9s4/
|
||||
- /5d3nd9s4/
|
||||
aliases:
|
||||
- /blog/2020/05/19/running-roborio-native
|
||||
- /blog/running-roborio-native
|
||||
extra:
|
||||
excerpt: This post covers how to run a RoboRIO's operating system in Docker
|
||||
---
|
||||
|
||||
It has now been 11 weeks since I last had access to a [RoboRIO](https://www.ni.com/en-ca/support/model.roborio.html) to use for debugging code, and there are limits to my simulation software. So, I really only have one choice: *emulate my entire robot*.
|
||||
|
||||
My goal is to eventually have every bit of hardware on [5024](https://www.thebluealliance.com/team/5024)'s [Darth Raider](https://cs.5024.ca/webdocs/docs/robots/darthRaider) emulated, and running on my docker swarm. Conveniently, everything uses (mostly) the same CPU architecture. In this post, I will go over how to build a RoboRIO docker container.
|
||||
|
||||
## Host system requirements
|
||||
|
||||
This process requires a host computer with:
|
||||
- An x86_64 CPU
|
||||
- A decent amount of RAM
|
||||
- [Ubuntu 18.04](https://mirrors.lug.mtu.edu/ubuntu-releases/18.04/) or later
|
||||
- [Docker CE](https://docs.docker.com/engine/install/debian/) installed
|
||||
- [docker-compose](https://docs.docker.com/compose/install/) installed
|
||||
|
||||
## Getting a system image
|
||||
|
||||
This is the hardest step. To get a RoboRIO docker container running, you will need:
|
||||
- A copy of the latest RoboRIO firmware package
|
||||
- A copy of `libfakearmv7l.so` ([download](https://github.com/robotpy/fakearmv7l/releases/download/v1/libfakearmv7l.so))
|
||||
|
||||
### RoboRIO Firmware
|
||||
|
||||
To acquire a copy of the latest RoboRIO Firmware package, you will need to install the [FRC Game Tools](https://www.ni.com/en-ca/support/downloads/drivers/download.frc-game-tools.html) on a **Windows** machine (not wine).
|
||||
|
||||
After installing the toolsuite, and activating it with your FRC team's activation key (provided in Kit of Parts), you can grab the latest `FRC_roboRIO_XXXX_vXX.zip` file from the installation directory of the *FRC Imaging Tool* (This will vary depending on how, and where the Game Tools are installed).
|
||||
|
||||
After unzipping this file, you will find another ZIP file, and a LabVIEW FPGA file. Unzip the ZIP, and look for a file called `systemimage.tar.gz`. This is the RoboRIO system image. Copy it to your Ubuntu computer.
|
||||
|
||||
## Bootstrapping
|
||||
|
||||
The bootstrap process is made up of a few parts:
|
||||
|
||||
1. Enabling support for ARM-based docker containers
|
||||
2. Converting the RoboRIO system image to a Docker base image
|
||||
3. Building a Dockerfile with hacked auth
|
||||
|
||||
### Enabling Docker-ARM support
|
||||
|
||||
Since the RoboRIO system image and libraries are compiled to run on ARMv7l hardware, they will refuse to run on an x86_64 system. This is where [QEMU](https://www.qemu.org/) comes into play. We can use QEMU as an emulation layer between our docker containers and our CPU. To get QEMU set up, we must first install support for ARM->x86 emulation by running:
|
||||
|
||||
```sh
|
||||
sudo apt install qemu binfmt-support qemu-user-static -y
|
||||
```
|
||||
|
||||
Once QEMU has been installed, we must run the registration scripts with:
|
||||
|
||||
```sh
|
||||
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
|
||||
```
|
||||
|
||||
### Converting the system image to a Docker base
|
||||
|
||||
We have a system image filesystem, but need Docker to view it as a Docker image.
|
||||
|
||||
#### Using my pre-built image
|
||||
|
||||
Feel free to skip the following step, and just use my [pre-built](https://hub.docker.com/r/ewpratten/roborio) RoboRIO base image. It is already set up with hacked auth, and is (at the time of writing) based on firmware version `2020_v10`.
|
||||
|
||||
To use it, replace `roborio:latest` with `ewpratten/roborio:2020_v10` in the `docker-compose.yml` config below.
|
||||
|
||||
#### Building your own image
|
||||
|
||||
Make a folder, and put both the system image and `libfakearmv7l.so` in it. This will be your "working directory". Now, import the system image into docker with:
|
||||
|
||||
```sh
|
||||
docker import ./systemimage.tar.gz roborio:tmp
|
||||
```
|
||||
|
||||
This will build a docker base image out of the system image, and name it `roborio:tmp`. You can use this on its own, but if you want to deploy code to the container with [GradleRIO](https://github.com/wpilibsuite/GradleRIO), or SSH into the container, you will need to strip the NI Auth.
|
||||
|
||||
### Stripping National Instruments Auth
|
||||
|
||||
By default, the RoboRIO system image comes fairly locked down. To fix this, we can "extend" our imported docker image with some configuration to allow us to remove some unknown passwords.
|
||||
|
||||
In the working directory, we must first create a file called `common_auth`. This will store our modified authentication configuration. Add the following to the file:
|
||||
|
||||
```
|
||||
#
|
||||
# /etc/pam.d/common-auth - authentication settings common to all services
|
||||
#
|
||||
# This file is included from other service-specific PAM config files,
|
||||
# and should contain a list of the authentication modules that define
|
||||
# the central authentication scheme for use on the system
|
||||
# (e.g., /etc/shadow, LDAP, Kerberos, etc.). The default is to use the
|
||||
# traditional Unix authentication mechanisms.
|
||||
|
||||
# ~~~ This file is modified for use with Docker ~~~
|
||||
|
||||
# here are the per-package modules (the "Primary" block)
|
||||
# auth [success=2 auth_err=1 default=ignore] pam_niauth.so nullok
|
||||
# auth [success=1 default=ignore] pam_unix.so nullok
|
||||
# here's the fallback if no module succeeds
|
||||
# auth requisite pam_deny.so
|
||||
# prime the stack with a positive return value if there isn't one already;
|
||||
# this avoids us returning an error just because nothing sets a success code
|
||||
# since the modules above will each just jump around
|
||||
auth required pam_permit.so
|
||||
# and here are more per-package modules (the "Additional" block)
|
||||
|
||||
```
|
||||
|
||||
Now, we must create a `Dockerfile` in the same directory with the following contents:
|
||||
|
||||
```
|
||||
FROM roborio:tmp
|
||||
|
||||
# Fixes issues with the original RoboRIO image
|
||||
RUN mkdir -p /var/volatile/tmp && \
|
||||
mkdir -p /var/volatile/cache && \
|
||||
mkdir -p /var/volatile/log && \
|
||||
mkdir -p /var/run/sshd
|
||||
|
||||
RUN opkg update && \
|
||||
opkg install binutils-symlinks gcc-symlinks g++-symlinks libgcc-s-dev make libstdc++-dev
|
||||
|
||||
# Overwrite auth
|
||||
COPY common_auth /etc/pam.d/common-auth
|
||||
RUN useradd admin -ou 0 -g 0 -s /bin/bash -m
|
||||
RUN usermod -aG sudo admin
|
||||
|
||||
# Fixes for WPILib
|
||||
RUN mkdir -p /usr/local/frc/third-party/lib
|
||||
RUN chmod 777 /usr/local/frc/third-party/lib
|
||||
|
||||
# This forces uname to report armv7l
|
||||
COPY libfakearmv7l.so /usr/local/lib/libfakearmv7l.so
|
||||
RUN chmod +x /usr/local/lib/libfakearmv7l.so && \
|
||||
mkdir -p /home/admin/.ssh && \
|
||||
echo "LD_PRELOAD=/usr/local/lib/libfakearmv7l.so" >> /home/admin/.ssh/environment && \
|
||||
echo "PermitUserEnvironment yes" >> /etc/ssh/sshd_config && \
|
||||
echo "PasswordAuthentication no">> /etc/ssh/sshd_config
|
||||
|
||||
# Put the CPU into 32bit mode, and start an SSH server
|
||||
ENTRYPOINT ["setarch", "linux32", "&&", "/usr/sbin/sshd", "-D" ]
|
||||
```
|
||||
|
||||
This file will cause the container to:
|
||||
- Install needed tools
|
||||
- Configure an "admin" user with full permissions
|
||||
- Set r/w permissions for all FRC libraries
|
||||
- Overwrite the system architecture with a custom string to allow programs like `pip` to run properly
|
||||
- Enable password-less SSH login
|
||||
- Set the CPU to 32-bit mode
|
||||
|
||||
We can now build the final image with these commands:
|
||||
|
||||
```sh
|
||||
docker build -f ./Dockerfile -t roborio:local .
|
||||
docker rmi roborio:tmp
|
||||
docker tag roborio:local roborio:latest
|
||||
```
|
||||
|
||||
## Running the RoboRIO container locally
|
||||
|
||||
We can now use `docker-compose` to start a fake robot network locally, and run our RoboRIO container. First, we need to make a `docker-compose.yml` file. In this file, add:
|
||||
|
||||
```yml
|
||||
version: "3"
|
||||
|
||||
services:
|
||||
|
||||
roborio:
|
||||
image: roborio:latest # Change this to "ewpratten/roborio:2020_v10" if using my pre-built image
|
||||
networks:
|
||||
robo_net:
|
||||
ipv4_address: 10.50.24.2
|
||||
|
||||
networks:
|
||||
robo_net:
|
||||
ipam:
|
||||
driver: default
|
||||
config:
|
||||
- subnet: 10.50.24.0/24
|
||||
```
|
||||
|
||||
We can now start the RoboRIO container by running
|
||||
|
||||
```sh
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
You should now be able to SSH into the RoboRIO container with:
|
||||
|
||||
```sh
|
||||
ssh admin@10.50.24.2
|
||||
```
|
||||
|
||||
Or even deploy code to the container! (Just make sure to set your FRC team number to `5024`)
|
104
src/collections/_posts/2020-08-03-joystick-to-voltage.md
Normal file
@ -0,0 +1,104 @@
|
||||
---
|
||||
layout: default
|
||||
title: 'Notes from FRC: Converting joystick data to tank-drive outputs'
|
||||
description: and making a tank-based robot's movements look natural
|
||||
date: 2020-08-03
|
||||
enable_katex: true
|
||||
---
|
||||
|
||||
I am starting a new little series here called "Notes from FRC". The idea is that I am going to write about what I have learned over the past three years of working (almost daily) with robots, and hopefully someone in the future will find them useful. The production source code I based this post around is available [here](https://github.com/frc5024/lib5k/blob/cd8ad407146b514cf857c1d8ac82ac8f3067812b/common_drive/src/main/java/io/github/frc5024/common_drive/calculation/DifferentialDriveCalculation.java).
|
||||
|
||||
Today's topic is quite simple, yet almost nobody has written anything about it. One of the very first problems presented to you when working with an FRC robot is: *"I have a robot, and I have a controller... How do I make this thing move?"*. When I first started as a software developer at *Raider Robotics*, I decided to do some Googling, as I was sure someone would have at least written about this from the video-game industry... Nope.
|
||||
|
||||
Let's lay out the problem. We have an application that needs to run some motors from a joystick input. Periodically, we are fed a vector of joystick data, $\lbrack\begin{smallmatrix}T \\ S\end{smallmatrix}\rbrack$, where the values follow $-1\leq \lbrack\begin{smallmatrix}T \\ S\end{smallmatrix}\rbrack \leq 1$. $T$ denotes our *throttle* input, and $S$ denotes something we at Raider Robotics call *"rotation"*. As you will see later on, rotation is not quite the correct word, but none of us can come up with anything better. Some teams, who use a steering wheel as input instead of a joystick, call this number *wheel*, which makes sense in their context. For every time an input is received, we must also produce an output, $\lbrack\begin{smallmatrix}L \\ R\end{smallmatrix}\rbrack$, where the values follow $-12\leq \lbrack\begin{smallmatrix}L \\ R\end{smallmatrix}\rbrack \leq 12$. $\lbrack\begin{smallmatrix}L \\ R\end{smallmatrix}\rbrack$ is a vector containing *left* and *right* side motor output voltages respectively. Since we build [tank-drive](https://en.wikipedia.org/wiki/Tank_steering_systems)-style robots, when $\lbrack\begin{smallmatrix}L \\ R\end{smallmatrix}\rbrack = \lbrack\begin{smallmatrix}12 \\ 12\end{smallmatrix}\rbrack$, the robot would be moving forward at full speed, and when $\lbrack\begin{smallmatrix}L \\ R\end{smallmatrix}\rbrack = \lbrack\begin{smallmatrix}12 \\ 0\end{smallmatrix}\rbrack$, the robot would be pivoting right around the centre of its right track at full speed. The simplest way to convert a throttle and rotation input to left and right voltages is as follows:
|
||||
|
||||
$$
|
||||
output = 12\cdot\begin{bmatrix}T + S \\ T - S\end{bmatrix}
|
||||
$$
|
||||
|
||||
This can be expressed in Python as:
|
||||
|
||||
```python
|
||||
from typing import Tuple


def computeMotorOutputs(T: float, S: float) -> Tuple[float, float]:
    # Convert throttle and rotation directly into left/right voltages
    return (12 * (T + S), 12 * (T - S))
|
||||
```
|
||||
|
||||
In FRC, we call this method "arcade drive", since the controls feel like you are driving a tank in an arcade game. Although this is very simple, there is a big drawback. At high values of $T$ and $S$, precision is lost, because the raw sums can exceed the valid output range. The best solution I have seen to this problem is to divide both $L$ and $R$ by $\max(abs(L), abs(R))$ whenever that maximum is greater than $1.0$ (doing this on the unit-scale values, before multiplying by 12, preserves the left-right ratio). With this addition, the compute function now looks like this:
|
||||
|
||||
```python
|
||||
def computeMotorOutputs(T: float, S: float) -> Tuple[float, float]:
    # Calculate normal arcade values (unit scale)
    L = T + S
    R = T - S

    # Determine the largest output magnitude
    m = max(abs(L), abs(R))

    # Scale down if needed, preserving the left/right ratio
    if m > 1.0:
        L /= m
        R /= m

    # Convert to voltages
    return (12 * L, 12 * R)
|
||||
```
|
||||
|
||||
Perfect. Now we have solved the problem!
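
As a quick sanity check of the scaling (hypothetical inputs, using the function above):

```python
# Outputs are (left volts, right volts)
print(computeMotorOutputs(1.0, 1.0))   # (12.0, 0.0)  -> scaled back into range, full right pivot
print(computeMotorOutputs(0.5, 0.25))  # (9.0, 3.0)   -> already in range, passed through
```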
|
||||
|
||||
Of course, I'm not stopping here. Although arcade drive works, the result is not great. Small movements are very hard to get right, as a small movement on your controller will translate to a fairly large one on the robot (on an Xbox controller, we are fitting the entire range of 0 m/s to 5 m/s in about half an inch of joystick movement). This is generally tolerable when moving forward and turning, but when sitting still, it is near impossible to make precise rotational movements. Also, unless you have a lot of practice driving tank-drive vehicles, sharp turns are a big problem, as overshooting and skidding are very common. Wouldn't it be nice if we could have a robot that maneuvers in graceful curves like a car? This is where the second method of joystick-to-voltage conversion comes into play.
|
||||
|
||||
FRC teams like [254](https://www.team254.com/) and [971](https://frc971.org/) use variations of a calculation method called *"constant curvature drive"*. Curvature drive is only slightly different from arcade drive. Here is the new formula:
|
||||
|
||||
$$
|
||||
output = 12\cdot\begin{bmatrix}T + abs(T) \cdot S \\ T - abs(T) \cdot S\end{bmatrix}
|
||||
$$
|
||||
|
||||
If we also add the speed scaling from arcade drive, we are left with the following Python code:
|
||||
|
||||
```python
|
||||
def computeMotorOutputs(T: float, S: float) -> Tuple[float, float]:
    # Calculate normal curvature values (unit scale)
    L = T + abs(T) * S
    R = T - abs(T) * S

    # Determine the largest output magnitude
    m = max(abs(L), abs(R))

    # Scale down if needed, preserving the left/right ratio
    if m > 1.0:
        L /= m
        R /= m

    # Convert to voltages
    return (12 * L, 12 * R)
|
||||
```
|
||||
|
||||
The $S$ component now changes the curvature of the robot's path, rather than the heading's rate of change. This makes the robot much more controllable at high speeds. There is one downside, though: as a tradeoff, we have completely removed the robot's ability to turn when stopped.
|
||||
|
||||
This is where the final drive method comes into play. At Raider Robotics, we call it *"semi-constant curvature drive"*, and have been using it in gameplay with great success since 2019. Since we want to take the best parts of arcade drive and constant curvature drive, we came to the simple conclusion that we should just average the two methods. Doing this results in the following formula:
|
||||
|
||||
$$
|
||||
output = 12\cdot\begin{bmatrix}\frac{(T + abs(T) \cdot S) + (T + S)}{2} \\ \frac{(T - abs(T) \cdot S) + (T - S)}{2}\end{bmatrix}
|
||||
$$
|
||||
|
||||
And here is the associated Python code:
|
||||
|
||||
|
||||
```python
|
||||
def computeMotorOutputs(T: float, S: float) -> Tuple[float, float]:
    # Average the arcade and constant-curvature values (unit scale)
    L = ((T + abs(T) * S) + (T + S)) / 2
    R = ((T - abs(T) * S) + (T - S)) / 2

    # Determine the largest output magnitude
    m = max(abs(L), abs(R))

    # Scale down if needed, preserving the left/right ratio
    if m > 1.0:
        L /= m
        R /= m

    # Convert to voltages
    return (12 * L, 12 * R)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
I hope someone will some day find this post helpful. I am working on a few more FRC-related posts about more advanced topics, and things I have learned through my adventures at Raider Robotics. If you would like to check out the code that powers all of this, take a look at our core software library: [Lib5K](https://github.com/frc5024/lib5k)
|
89
src/collections/_posts/2020-08-13-drivetrain-navigation.md
Normal file
89
src/collections/_posts/2020-08-13-drivetrain-navigation.md
Normal file
@ -0,0 +1,89 @@
|
||||
---
|
||||
layout: default
|
||||
title: 'Notes from FRC: Autonomous point-to-point navigation'
|
||||
description: The tale of some very curvy math
|
||||
date: 2020-08-13
|
||||
enable_katex: true
|
||||
---
|
||||
|
||||
This post is a continuation of my "Notes from FRC" series. If you haven't already, I recommend reading my post on [Converting joystick data to tank-drive outputs](@/blog/2020-08-03-Joystick-to-Voltage.md). Some concepts in this post were introduced there. Like last time, to see the production code behind this post, check [here](https://github.com/frc5024/lib5k/blob/ab90994b2a0c769abfdde9a834133725c3ce3a38/common_drive/src/main/java/io/github/frc5024/common_drive/DriveTrainBase.java) and [here](https://github.com/frc5024/lib5k/tree/master/purepursuit/src/main/java/io/github/frc5024/purepursuit/pathgen).
|
||||
|
||||
At *Raider Robotics*, most of my work has been spent on these three subjects:
|
||||
- Productivity infrastructure
|
||||
- Developing our low-level library
|
||||
- Writing the software that powers our past three robots' *DriveTrain*s
|
||||
|
||||
When I joined the team, we had just started to design effective autonomous locomotion code. Although functional, our ability to maneuver robots around the FRC field autonomously was very limited and imprecise. It has since been my goal to build a powerful software framework for precisely estimating our robot's real-world position at all times, and for giving anyone the tools to call a single method and have the robot drive from point *A* to *B*. My goal with this post is to outline how this system actually works. But first, I need to explain some core concepts:
|
||||
|
||||
**Poses**. At Raider Robotics, we use the following vector components to denote a robot's position and rotation on a 2D plane (the floor). We call this magic vector a *pose*:
|
||||
|
||||
$$
|
||||
pose = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix}
|
||||
$$
|
||||
|
||||
With a robot sitting at $\big[\begin{smallmatrix}0 \\ 0 \\ 0\end{smallmatrix}\big]$, it would be facing in the positive $x$ direction.
|
||||
|
||||
**Localization**. When navigating the real world, the first challenge is knowing where the robot is. At Raider Robotics, we use an [Unscented Kalman Filter](https://en.wikipedia.org/wiki/Kalman_filter#Unscented_Kalman_filter) (UKF) that fuses high-accuracy encoder and gyroscope data with medium-accuracy VI-SLAM data fed from our robot's computer vision system. Our encoders are attached to the robot's tank track motor output shafts, counting the distance traveled by each track. Although this sounds extremely complicated, the core idea can be boiled down to a simple (and low-accuracy) technique that originated in marine navigation, called [Dead Reckoning](https://en.wikipedia.org/wiki/Dead_reckoning):
|
||||
|
||||
$$
|
||||
\Delta P = \begin{bmatrix}\frac{\Delta L + \Delta R}{2} \cdot \cos(\theta\cdot\frac{\pi}{180}) \\ \frac{\Delta L + \Delta R}{2} \cdot \sin(\theta\cdot\frac{\pi}{180}) \\ \Delta \theta \end{bmatrix}
|
||||
$$
|
||||
|
||||
The result of this equation, $\Delta P$, is accumulated over time into the robot's *pose*. $\Delta L$ and $\Delta R$ are the distance deltas reported by the *left* and *right* tank tracks, and $\theta$ is the robot's current heading.
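As a concrete (and heavily simplified) example, here is a minimal Python sketch of that accumulation step. It assumes encoder deltas in metres and a gyro-supplied $\Delta\theta$ in degrees; this is purely illustrative, and is not the UKF implementation linked above.

```python
import math
from dataclasses import dataclass


@dataclass
class Pose:
    x: float = 0.0      # metres
    y: float = 0.0      # metres
    theta: float = 0.0  # degrees


def updatePose(pose: Pose, dL: float, dR: float, dTheta: float) -> Pose:
    # The average track movement approximates the distance the chassis travelled
    distance = (dL + dR) / 2.0

    # Project that distance onto the field axes using the current heading
    pose.x += distance * math.cos(math.radians(pose.theta))
    pose.y += distance * math.sin(math.radians(pose.theta))

    # The gyro handles rotation directly
    pose.theta += dTheta
    return pose
```

Calling `updatePose()` every robot loop (20ms in FRC) keeps the estimate fresh enough for the navigation math below.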
|
||||
|
||||
With an understanding of the core concepts, let's say we have a tank-drive robot sitting at pose $A$, and we want to get it to pose $B$.
|
||||
|
||||
$$
|
||||
A = \begin{bmatrix}0 \\ 0 \\ 0\end{bmatrix}
|
||||
$$
|
||||
|
||||
$$
|
||||
B = \begin{bmatrix}0 \\ 1 \\ 90\end{bmatrix}
|
||||
$$
|
||||
|
||||
This raises an interesting problem. Our *goal pose* is directly to the left of our *current pose*, and tanks cannot strafe (travel in the $y$ axis without turning). Luckily, to solve this problem we just need to know our error from the goal pose as a distance ($\Delta d$), and a heading ($\Delta\theta$):
|
||||
|
||||
$$
|
||||
\Delta d = \sqrt{\Delta x^2 + \Delta y^2}
|
||||
$$
|
||||
|
||||
$$
|
||||
\Delta\theta = \operatorname{atan2}(\Delta y, \Delta x) \cdot \frac{180}{\pi}
|
||||
$$
|
||||
|
||||
Notice how a polar coordinate containing these values: $\big[\begin{smallmatrix}\Delta d \\ \Delta\theta\end{smallmatrix}\big]$ is very similar to our joystick input vector from the [previous post](@/blog/2020-08-03-Joystick-to-Voltage.md): $\big[\begin{smallmatrix}T \\ S\end{smallmatrix}\big]$. Converting our positional error into a polar coordinate makes the process of navigating to any point very simple. All we need to do is take the [Hadamard product](https://en.wikipedia.org/wiki/Hadamard_product_(matrices)) of the coordinate matrix with a gain matrix to make small adjustments to the output based on the physical characteristics of your robot, like the amount of voltage required to overcome static friction. This is a very simple P-gain controller.
|
||||
|
||||
$$
|
||||
input = \begin{bmatrix}\Delta d \\ \Delta\theta\end{bmatrix}\circ\begin{bmatrix}K_t \\ K_s \end{bmatrix}
|
||||
$$
|
||||
|
||||
This new input vector can now be fed directly into the code from the previous post, and as long as the $K_t$ and $K_s$ gains are tuned correctly, your robot will smoothly and efficiently navigate from pose $A$ to pose $B$ automatically.
|
||||
|
||||
There are a few tweaks that can be made to this method to further smooth out the robot's movement. Firstly, we can multiply $\Delta d$ by a gain derived from a restricted version of $\Delta\theta$. This causes the robot to slow down whenever it is pointed too far off course; while moving more slowly, it can turn faster and more efficiently, which cuts down on the time needed to face the goal pose in the first place. We can calculate this gain, $m$, as:
|
||||
|
||||
$$
|
||||
m = \big(-1 * \frac{\min(abs(\Delta\theta), 90)}{90}\big) + 1
|
||||
$$
|
||||
|
||||
$m$ is now a scalar that falls in $0 \leq m \leq 1$. Our calculation to determine a new "input" vector is now as follows:
|
||||
|
||||
$$
|
||||
input = \begin{bmatrix}\Delta d \\ \Delta\theta\end{bmatrix}\circ\begin{bmatrix}K_t \\ K_s \end{bmatrix} \circ \begin{bmatrix}m \\ 1 \end{bmatrix}
|
||||
$$
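Putting the pieces together, here is a rough Python sketch of the whole point-to-point controller described above. The gain values `Kt` and `Ks` are hypothetical placeholders, `Pose` comes from the sketch earlier, and `computeMotorOutputs()` is the semi-constant curvature function from the previous post; as noted below, our production code also wraps the two error terms in PD/PI controllers.

```python
import math

Kt = 0.5   # hypothetical throttle gain
Ks = 0.02  # hypothetical steering gain


def chaseGoal(pose: Pose, goal_x: float, goal_y: float):
    # Positional error between the current pose and the goal pose
    dx = goal_x - pose.x
    dy = goal_y - pose.y

    # Convert the error into a polar coordinate: a distance and a heading error
    d = math.hypot(dx, dy)
    dTheta = math.degrees(math.atan2(dy, dx)) - pose.theta

    # Slow down while the robot is pointed too far away from the goal
    m = 1.0 - (min(abs(dTheta), 90.0) / 90.0)

    # Build the "input vector" and hand it off to the drivebase code
    T = d * Kt * m
    S = dTheta * Ks
    return computeMotorOutputs(T, S)
```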
|
||||
|
||||
For even more controllability, Raider Robotics passes $\Delta d$ through a [PD](https://en.wikipedia.org/wiki/PID_controller#Selective_use_of_control_terms) controller, and $\Delta\theta$ through a [PI](https://en.wikipedia.org/wiki/PID_controller#PI_controller) controller before converting them to motor values... and that is it! With just a couple formulæ, we have a fully functional autonomous point-to-point locomotion system.
|
||||
|
||||
For a real-world example of this method in use, check out 5024's robot (bottom right) and 1114's robot (bottom left). Both teams were running nearly the same implementation, and both robots drove autonomously for the first 15 seconds of the game:
|
||||
|
||||
<iframe
|
||||
src="https://www.youtube.com/embed/5Q39LIVcXSQ"
|
||||
style="width: 100%; aspect-ratio: 16 / 9;"
|
||||
title="YouTube video player"
|
||||
frameborder="0"
|
||||
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
|
||||
allowfullscreen
|
||||
></iframe>
|
||||
|
||||
---
|
||||
|
||||
I hope someone will some day find this post helpful. Most papers about this topic went way over my head in 10th grade, or were over-complicated for the task. If you would like me to go further in depth on this topic, [contact me](/contact) and let me know. I will gladly help explain things, or write a new post further expanding on a topic.
|
57
src/collections/_posts/2020-08-23-notetaking-with-latex.md
Normal file
@ -0,0 +1,57 @@
|
||||
---
|
||||
layout: default
|
||||
title: Taking notes with Markdown and LaTeX
|
||||
description: Using a lot of tech to replace a piece of paper
|
||||
date: 2020-08-23
|
||||
extra:
|
||||
excerpt: I have completely reworked my school notetaking system to use LaTeX. This post outlines how I did everything, and my new workflow.
|
||||
redirect_from:
|
||||
- /post/68df02l4/
|
||||
- /68df02l4/
|
||||
aliases:
|
||||
- /blog/2020/08/23/notetaking-with-latex
|
||||
- /blog/notetaking-with-latex
|
||||
---
|
||||
|
||||
*You can view my public demo for this post [here](https://github.com/Ewpratten/school-notes-demo)*
|
||||
|
||||
Recently, I have been on a bit of a mission to improve my school workflow with software. Over the past month, I have built a cleaner [student portal](https://github.com/Ewpratten/student_portal#unofficial-tvdsb-student-portal-webapp) for my school and [written a tool](https://github.com/Ewpratten/timeandplace-api#timeandplace-api--cli-application) for automating in-class attendance. Alongside working on these projects, I have also been refining my notetaking system for school.
|
||||
|
||||
Since 9th grade, I have been taking notes in markdown in a private GitHub repository, and compiling them to HTML using a makefile for each course. While this system has worked OK, it has been far from perfect. Recently, I have been working very hard to give this system a much-needed upgrade. Here is the new tech stack:
|
||||
|
||||
- The [Bazel buildsystem](https://bazel.build)
|
||||
- [Markdown](https://en.wikipedia.org/wiki/Markdown)
|
||||
- [LaTeX](https://en.wikipedia.org/wiki/LaTeX)
|
||||
- [MathJax](https://www.mathjax.org/)
|
||||
- [Beamer](https://ctan.org/pkg/beamer)
|
||||
- [Tikz & PGF](https://ctan.org/pkg/pgf)
|
||||
- [Pandoc](https://pandoc.org/)
|
||||
- [Zathura](https://pwmt.org/projects/zathura/)
|
||||
- [Starlark](https://docs.bazel.build/versions/master/skylark/language.html)
|
||||
- [Github Actions](https://github.com/features/actions) CI
|
||||
|
||||
The idea is that every course I take becomes its own Bazel package, with subpackages for things like assignments, papers, notes, and presentations. I can compile everything just by running the command `bazel build //:all`. All builds are cached using Bazel's build caching system, so when I run the command to compile my notes (I love saying that), I only end up compiling things that have changed since the last run. The setup for all of this is quite simple. All that is really needed is a Bazel workspace with the [`bazel_pandoc`](https://github.com/ProdriveTechnologies/bazel-pandoc) rules loaded (although I have opted to use some custom [genrules](https://docs.bazel.build/versions/master/be/general.html#genrule) instead). Using these rules, markdown files can be concatenated, and compiled into a PDF. I also use a modified version of the [Eisvogel](https://github.com/Wandmalfarbe/pandoc-latex-template) Pandoc template to make all my documents look a little neater.
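To give a rough idea of what one of these build targets looks like, here is a minimal sketch of a markdown-to-PDF genrule. The file names are placeholders, and it assumes `pandoc` (and a LaTeX engine for `--pdf-engine=xelatex`) is available on the host; my actual setup also wires in the modified Eisvogel template mentioned above.

```python
# BUILD (sketch): compile one markdown file into a PDF with Pandoc
genrule(
    name = "notes_pdf",
    srcs = ["notes.md"],
    outs = ["notes.pdf"],
    cmd = "pandoc $(location notes.md) -o $@ --pdf-engine=xelatex",
)
```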
|
||||
|
||||
In terms of workflow, I write all my notes as markdown files with [embedded LaTeX](https://pandoc.org/MANUAL.html#math) for any equations and charts I may need. All of this is done inside of VSCode, and I have a custom `tasks.json` file that lets me press <kbd>Ctrl</kbd> + <kbd>Shift</kbd> + <kbd>b</kbd> to re-compile whatever I am currently working on. I also keep Zathura open in a window to the side for a nearly-live preview system.
|
||||
|
||||
<script src="https://gist.github.com/Ewpratten/163aa9c9cb4e8c20e732e3713c95c915.js" ></script>
|
||||
|
||||

|
||||
*A screenshot of my workspace*
|
||||
|
||||
Now, a question came up: *"how do you easily distribute notes and assignments to classmates and professors?"* That question got me stuck for a while, but here is the system I came up with:
|
||||
|
||||
1. I write an assignment
|
||||
2. I push it to the private GitHub repository
|
||||
3. GitHub Actions picks up the deployment with a custom build script
|
||||
4. Every document is built into a PDF, and packaged with a directory listing generated by [`tree -H`](http://mama.indstate.edu/users/ice/tree/tree.1.html#XML/JSON/HTML%20OPTIONS)
|
||||
5. Everything is pushed to a subdomain on my website via GitHub pages
|
||||
6. I can share documents via URL to anyone
|
||||
|
||||
This is almost entirely accomplished by a shell script and a custom CI script.
|
||||
|
||||
<script src="https://gist.github.com/Ewpratten/4a69af01250291eb2981510feddef642.js"></script>
|
||||
|
||||
---
|
||||
|
||||
If you have any questions about this system, want me to write another post about it, or would like me to walk you through setting up a notes workspace of your own, [contact me](/contact)
|
250
src/collections/_posts/2020-09-03-bazel-and-avr.md
Normal file
@ -0,0 +1,250 @@
|
||||
---
|
||||
layout: default
|
||||
title: Compiling AVR-C code with a modern build system
|
||||
description: Bringing Bazel to 8-bit microcontrollers
|
||||
date: 2020-09-03
|
||||
tags:
|
||||
- avr
|
||||
- embedded
|
||||
- bazel
|
||||
- walkthrough
|
||||
extra:
|
||||
excerpt: In this post, I cover my process of combining low level programming with
|
||||
a very high level buildsystem.
|
||||
redirect_from:
|
||||
- /post/68dk02l4/
|
||||
- /68dk02l4/
|
||||
aliases:
|
||||
- /blog/2020/09/03/bazel-and-avr
|
||||
- /blog/bazel-and-avr
|
||||
---
|
||||
|
||||
*The GitHub repository for everything in this post can be found [here](https://github.com/Ewpratten/avr-for-bazel-demo)*
|
||||
|
||||
When writing software for an Arduino, or any other [AVR](https://en.wikipedia.org/wiki/AVR_microcontrollers)-based device, there are generally three main options. You can use the [Arduino IDE](https://www.arduino.cc/en/main/software) with [arduino-cli](https://github.com/arduino/arduino-cli), which is, in my opinion, a clunky system that is great for high levels of abstraction and for teaching people how to program, but lacks the kind of easy customization I am interested in. If you are looking for something more advanced (and that works in your favorite IDE), you might look at [PlatformIO](https://platformio.org/). Finally, you can program without any hardware abstraction library at all, and use [avr-libc](https://www.nongnu.org/avr-libc/) along with [avr-gcc](https://www.microchip.com/mplab/avr-support/avr-and-arm-toolchains-c-compilers) and [avrdude](https://www.nongnu.org/avrdude/).
|
||||
|
||||
This final option is my favorite by far, as it both forces me to think about how the system I am building is actually working "behind the scenes", and lets me do everything exactly the way I want. Unfortunately, when working directly with the AVR system libraries, the only buildsystem / tool that is available (without a lot of extra work) is [Make](https://en.wikipedia.org/wiki/Make_(software)). As somebody who spends 90% of his time working with higher-level buildsystems like [Gradle](https://gradle.org/) and [Bazel](https://bazel.build), I don't really like needing to deal with Makefiles, and manually handle dependency loading. This got me thinking. I have spent a lot of time working in Bazel, and cross-compiling for the armv7l platform via the [FRC Toolchain](https://launchpad.net/~wpilib/+archive/ubuntu/toolchain/). How hard can it be to add AVR Toolchain support to Bazel?
|
||||
|
||||
*The answer: It's pretty easy.*
|
||||
|
||||
The Bazel buildsystem allows users to define custom toolchains via the [toolchain](https://docs.bazel.build/versions/master/toolchains.html) rule. I am going to assume you have a decent understanding of the [Starlark](https://docs.bazel.build/versions/master/skylark/language.html) DSL, or at least Python3 (which Starlark is syntactically based on). To get started setting up a Bazel toolchain, I create empty `WORKSPACE` and `BUILD` files, a `.bazelrc` file, a new Bazel package named `toolchain` (containing a `.bzl` file for the toolchain settings), and a package to store my test program.
|
||||
|
||||
```
|
||||
/project
|
||||
|
|
||||
+-.bazelrc
|
||||
+-BUILD
|
||||
+-example
|
||||
| |
|
||||
| +-BUILD
|
||||
| +-main.cc
|
||||
+-toolchain
|
||||
| |
|
||||
| +-BUILD
|
||||
| +-avr.bzl
|
||||
+-WORKSPACE
|
||||
```
|
||||
|
||||
I only learned about this recently, but you can use a `.bazelrc` file to define constant arguments to be passed to the buildsystem per-project. For this project, I am adding the following arguments to the config file to define which toolchain to use for which target:
|
||||
|
||||
```sh
|
||||
# .bazelrc
|
||||
|
||||
# Use our custom-configured c++ toolchain.
|
||||
build:avr_config --crosstool_top=//toolchain:avr_suite
|
||||
build:avr_config --cpu=avr
|
||||
|
||||
# Use the default Bazel C++ toolchain to build the tools used during the
|
||||
# build.
|
||||
build:avr_config --host_crosstool_top=@bazel_tools//tools/cpp:toolchain
|
||||
```
|
||||
|
||||
With this config selected, builds will use a custom toolchain named `avr_suite` and target the `avr` CPU architecture, while the final line makes sure the host's own toolchain is used for compiling tools needed by Bazel itself (since we can't run AVR code on the host machine). We now have everything needed to tell Bazel what to use when building, but we have not actually defined the toolchain in the first place. This step comes in two parts. First, we need to define a toolchain implementation (this happens in `avr.bzl`). This implementation defines things like where to find every tool on the host, which libc version to use, and what types of tools avr-gcc provides in the first place. We can start by adding some `load` statements to the file to tell Bazel which functions we need.
|
||||
|
||||
```python
|
||||
# toolchain/avr.bzl
|
||||
|
||||
load("@bazel_tools//tools/build_defs/cc:action_names.bzl", "ACTION_NAMES")
|
||||
load(
|
||||
"@bazel_tools//tools/cpp:cc_toolchain_config_lib.bzl",
|
||||
"feature",
|
||||
"flag_group",
|
||||
"flag_set",
|
||||
"tool_path",
|
||||
)
|
||||
```
|
||||
|
||||
Once this is done, we need to define everything that this toolchain implementation can do. In this case, avr-gcc can link executables, link dynamic libraries, and link "nodeps" dynamic libraries.
|
||||
|
||||
```python
|
||||
# ...
|
||||
|
||||
all_link_actions = [
|
||||
ACTION_NAMES.cpp_link_executable,
|
||||
ACTION_NAMES.cpp_link_dynamic_library,
|
||||
ACTION_NAMES.cpp_link_nodeps_dynamic_library,
|
||||
]
|
||||
```
|
||||
|
||||
We also need to tell Bazel where to find every tool. This may vary platform-to-platform, but with a standard avr-gcc install on Linux, the following should work just fine. Experienced Bazel users may wish to make use of Bazel's [`config_setting` and `select`](https://docs.bazel.build/versions/master/configurable-attributes.html) rules to allow the buildsystem to run on any type of host via a CLI flag.
|
||||
|
||||
```python
|
||||
# ...
|
||||
|
||||
tool_paths = [
|
||||
tool_path(
|
||||
name = "gcc",
|
||||
path = "/usr/bin/avr-gcc",
|
||||
),
|
||||
tool_path(
|
||||
name = "ld",
|
||||
path = "/usr/bin/avr-ld",
|
||||
),
|
||||
tool_path(
|
||||
name = "ar",
|
||||
path = "/usr/bin/avr-ar",
|
||||
),
|
||||
tool_path(
|
||||
name = "cpp",
|
||||
path = "/usr/bin/avr-g++",
|
||||
),
|
||||
tool_path(
|
||||
name = "gcov",
|
||||
path = "/usr/bin/avr-gcov",
|
||||
),
|
||||
tool_path(
|
||||
name = "nm",
|
||||
path = "/usr/bin/avr-nm",
|
||||
),
|
||||
tool_path(
|
||||
name = "objdump",
|
||||
path = "/usr/bin/avr-objdump",
|
||||
),
|
||||
tool_path(
|
||||
name = "strip",
|
||||
path = "/usr/bin/avr-strip",
|
||||
),
|
||||
]
|
||||
```
|
||||
|
||||
Finally, we need to define the actual avr-toolchain implementation. This can be done via a simple function, and the creation of a new custom rule:
|
||||
|
||||
```python
|
||||
# ...
|
||||
|
||||
def _avr_impl(ctx):
|
||||
features = [
|
||||
feature(
|
||||
name = "default_linker_flags",
|
||||
enabled = True,
|
||||
flag_sets = [
|
||||
flag_set(
|
||||
actions = all_link_actions,
|
||||
flag_groups = ([
|
||||
flag_group(
|
||||
flags = [
|
||||
"-lstdc++",
|
||||
],
|
||||
),
|
||||
]),
|
||||
),
|
||||
],
|
||||
),
|
||||
]
|
||||
|
||||
return cc_common.create_cc_toolchain_config_info(
|
||||
ctx = ctx,
|
||||
toolchain_identifier = "avr-toolchain",
|
||||
host_system_name = "local",
|
||||
target_system_name = "local",
|
||||
target_cpu = "avr",
|
||||
target_libc = "unknown",
|
||||
compiler = "avr-g++",
|
||||
abi_version = "unknown",
|
||||
abi_libc_version = "unknown",
|
||||
tool_paths = tool_paths,
|
||||
cxx_builtin_include_directories = [
|
||||
"/usr/lib/avr/include",
|
||||
"/usr/lib/gcc/avr/5.4.0/include"
|
||||
],
|
||||
)
|
||||
|
||||
cc_toolchain_config = rule(
|
||||
attrs = {},
|
||||
provides = [CcToolchainConfigInfo],
|
||||
implementation = _avr_impl,
|
||||
)
|
||||
```
|
||||
|
||||
The `cxx_builtin_include_directories` argument is very important. This tells the compiler where to find the libc headers. **Both** paths are required, as the headers are split between two directories on Linux for some reason. We are now done with the `avr.bzl` file, and can add the following to the `toolchain` package's `BUILD` file to register our custom toolchain as an official CC toolchain for Bazel to use:
|
||||
|
||||
```python
|
||||
# toolchain/BUILD
|
||||
|
||||
load("@rules_cc//cc:defs.bzl", "cc_toolchain", "cc_toolchain_suite")
|
||||
load(":avr.bzl", "cc_toolchain_config")
|
||||
|
||||
cc_toolchain_config(name = "avr_toolchain_config")
|
||||
|
||||
cc_toolchain_suite(
|
||||
name = "avr_suite",
|
||||
toolchains = {
|
||||
"avr": ":avr_toolchain",
|
||||
},
|
||||
)
|
||||
|
||||
filegroup(name = "empty")
|
||||
|
||||
cc_toolchain(
|
||||
name = "avr_toolchain",
|
||||
all_files = ":empty",
|
||||
compiler_files = ":empty",
|
||||
dwp_files = ":empty",
|
||||
linker_files = ":empty",
|
||||
objcopy_files = ":empty",
|
||||
strip_files = ":empty",
|
||||
supports_param_files = 0,
|
||||
toolchain_config = ":avr_toolchain_config",
|
||||
toolchain_identifier = "avr-toolchain",
|
||||
)
|
||||
```
|
||||
|
||||
That's it. Now, if we want to compile a simple blink program in AVR-C, we can add the following to `main.cc`:
|
||||
|
||||
```cpp
|
||||
#ifndef F_CPU
|
||||
#define F_CPU 16000000UL
|
||||
#endif
|
||||
|
||||
#include <avr/io.h>
|
||||
#include <util/delay.h>
|
||||
|
||||
int main(void)
|
||||
{
|
||||
DDRC = 0xFF;
|
||||
while(1) {
|
||||
PORTC = 0xFF;
|
||||
_delay_ms(1000);
|
||||
PORTC= 0x00;
|
||||
_delay_ms(1000);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
To compile this, define a `cc_binary` in the example `BUILD` file, just like any normal Bazel program.
|
||||
|
||||
```python
|
||||
# example/BUILD
|
||||
|
||||
load("@rules_cc//cc:defs.bzl", "cc_binary")
|
||||
|
||||
cc_binary(
|
||||
name = "example",
|
||||
srcs = ["main.cc"],
|
||||
# Add any needed cc options here for your specific platform
|
||||
)
|
||||
```
|
||||
|
||||
This can be compiled with `bazel build //example --config=avr_config`, and the output binary will be in the `bazel-bin` directory. You can run `avr-objcopy` and `avrdude` manually just like with a normal program.
|
||||
|
||||
Importantly, every normal Bazel function will still work. Want to include [EigenArduino](https://github.com/vancegroup/EigenArduino) in your project? Just import the [`rules_foreign_cc`](https://github.com/bazelbuild/rules_foreign_cc) ruleset and load the Eigen library like normal. You can also run unit tests through Bazel's regular [testing rules](https://docs.bazel.build/versions/master/be/c-cpp.html#cc_test). If you are a masochist, you could even try loading the [pybind11 rules](https://github.com/pybind/pybind11_bazel) and embedding a Python interpreter in your code.
|
115
src/collections/_posts/2020-09-10-codespaces-for-frc.md
Normal file
@ -0,0 +1,115 @@
|
||||
---
|
||||
layout: default
|
||||
title: Integrating GitHub Codespaces with FRC
|
||||
description: Robotics software development in your browser
|
||||
date: 2020-09-10
|
||||
tags:
|
||||
- github
|
||||
- frc
|
||||
- project
|
||||
- java
|
||||
extra:
|
||||
excerpt: 'I was recently accepted into the GitHub Codespaces beta test program and
|
||||
decided to try it out on the largest open source project I am currently involved
|
||||
with. '
|
||||
aliases:
|
||||
- /blog/2020/09/10/codespaces-for-frc
|
||||
- /blog/codespaces-for-frc
|
||||
---
|
||||
|
||||
I was recently accepted into the [GitHub Codespaces](https://github.com/features/codespaces) beta test program. After reading through the documentation, I wanted to find a good use for this new tool, and decided to try it out on the largest open source project I am currently involved with. At *Raider Robotics* (@frc5024), we maintain a fairly large robotics software library called [Lib5K](https://github.com/frc5024/lib5k). The goal of this library is to provide an easy-to-use framework for new programmers to use when writing control systems code. As this library has become more complex, we have recently forked it into its own GitHub repository, and completely reworked our dependency system to match that of any other large OSS project. I figured that setting this repository up to use Codespaces might make it easier for other developers at Raider Robotics to make small changes to the library without needing to pull in the nearly 5GB of dependencies needed just to compile the codebase.
|
||||
|
||||
I am quite impressed by how easy it is to set up a Codespace environment. All you need to do is load a pre-made Docker image and write some JSON to configure the environment. I decided to write a custom Dockerfile that extends the [`mcr.microsoft.com/vscode/devcontainers/base:ubuntu`](https://hub.docker.com/_/microsoft-vscode-devcontainers) base image.
|
||||
|
||||
```dockerfile
|
||||
FROM mcr.microsoft.com/vscode/devcontainers/base:ubuntu
|
||||
|
||||
RUN apt update -y
|
||||
RUN apt install sudo -y
|
||||
|
||||
# Install needed packages
|
||||
RUN sudo apt install -y python3 python3-pip
|
||||
RUN sudo apt install -y curl wget
|
||||
RUN sudo apt install -y zip unzip
|
||||
|
||||
# Install sdkman
|
||||
RUN curl -s "https://get.sdkman.io?rcupdate=true" | bash
|
||||
|
||||
# Install java
|
||||
RUN bash -c "source /root/.sdkman/bin/sdkman-init.sh && sdk install java 11.0.8-open"
|
||||
|
||||
# Install gradle
|
||||
RUN bash -c "source /root/.sdkman/bin/sdkman-init.sh && sdk install gradle"
|
||||
|
||||
RUN echo "source /root/.sdkman/bin/sdkman-init.sh" >> /root/.bashrc
|
||||
|
||||
# Install WPILib
|
||||
RUN wget https://github.com/wpilibsuite/allwpilib/releases/download/v2020.3.2/WPILib_Linux-2020.3.2.tar.gz -O wpilib.tar.gz
|
||||
RUN mkdir -p /root/wpilib/2020
|
||||
RUN tar -zxvf wpilib.tar.gz -C /root/wpilib/2020
|
||||
```
|
||||
|
||||
All that is being done in this container is:
|
||||
|
||||
- Installing Python
|
||||
- Installing [sdkman](https://sdkman.io)
|
||||
- Installing OpenJDK 11
|
||||
- Installing Gradle
|
||||
- Installing [WPILib](https://github.com/wpilibsuite/allwpilib/)
|
||||
|
||||
In the world of FRC development, almost all codebases depend on a library and toolset called WPILib. The tar file that is downloaded contains a copy of the library, all JNI libraries depended on by WPILib itself, some extra tooling, and a custom JVM built specifically to run on the [NI RoboRIO](https://www.ni.com/en-ca/support/model.roborio.html).
|
||||
|
||||
With this docker container, all we need to do is tell GitHub how to set up a Codespace for the repo. This is done by placing a file in `.devcontainer/devcontainer.json`:
|
||||
|
||||
```js
|
||||
// .devcontainer/devcontainer.json
|
||||
{
|
||||
// Name of the environment
|
||||
"name":"General FRC Development",
|
||||
|
||||
// Set the Docker image to use
|
||||
// I will explain this below
|
||||
"image":"ewpratten/frc_devcontainer:2020.3.2",
|
||||
|
||||
// Set any default VSCode settings here
|
||||
"settings": {
|
||||
"terminal.integrated.shell.linux":"/bin/bash"
|
||||
},
|
||||
|
||||
// Tell VSCode where to find the workspace directory
|
||||
"workspaceMount": "source=${localWorkspaceFolder},target=/root/workspace,type=bind,consistency=cached",
|
||||
"workspaceFolder": "/root/workspace",
|
||||
|
||||
// Allow the host and container docker daemons to communicate
|
||||
"mounts": [ "source=/var/run/docker.sock,target=/var/run/docker-host.sock,type=bind" ],
|
||||
|
||||
// Any extensions you want can go here
|
||||
"extensions": [
|
||||
// Needed extensions for using WPILib
|
||||
"redhat.java",
|
||||
"ms-vscode.cpptools",
|
||||
"vscjava.vscode-java-pack",
|
||||
|
||||
// The WPILib extension itself
|
||||
"wpilibsuite.vscode-wpilib"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
Notice the line `"image":"ewpratten/frc_devcontainer:2020.3.2",`. This tells VSCode and Codespaces to pull a Docker image from my Docker Hub account. Instead of making Codespaces build an image for itself when it loads, I have pre-built the image and published it [here](https://hub.docker.com/r/ewpratten/frc_devcontainer). The reason for this is quite simple: Codespaces will flat-out crash if it tries to build my Dockerfile, due to WPILib just being too big.
|
||||
|
||||
With a minimal amount of work, I got everything needed to develop and test FRC robotics code running in the browser via Codespaces.
|
||||
|
||||

|
||||
|
||||
*Launching Codespaces from a GitHub repository*
|
||||
|
||||

|
||||
|
||||
*Editing code in the browser*
|
||||
|
||||
|
||||
<!-- - Pushing a container this large to dockerhub requires the daemon to be restarted with -->
|
||||
|
||||
<!-- sudo systemctl stop docker
|
||||
sudo dockerd -s overlay --max-concurrent-uploads=1 -->
|
87
src/collections/_posts/2020-09-17-ultralight-writeup.md
Normal file
@ -0,0 +1,87 @@
|
||||
---
|
||||
layout: default
|
||||
title: Building a mini maven server
|
||||
description: 'Project overview: The Ultralight maven server'
|
||||
date: 2020-09-17
|
||||
written: 2020-09-05
|
||||
tags:
|
||||
- project
|
||||
- github
|
||||
- maven
|
||||
- java
|
||||
extra:
|
||||
excerpt: In this post, I explain the process of building my own personal maven
|
||||
server, and show how simple maven servers really are.
|
||||
redirect_from:
|
||||
- /post/2jf002s4/
|
||||
- /2jf002s4/
|
||||
aliases:
|
||||
- /blog/2020/09/17/ultralight-writeup
|
||||
- /blog/ultralight-writeup
|
||||
---
|
||||
|
||||
I have been looking around for a small and easy-to-use [maven](https://maven.apache.org/) server to host my personal Java libraries for some time now. I originally went with [Jitpack.io](https://jitpack.io/), but didn't like the fact that Jitpack overwrites artifact `groupID` fields. This means that instead of specifying a package via something like `ca.retrylife:librandom:1.0.0`, a user would have to write `com.github.ewpratten:librandom:1.0.0`. While this is not a huge deal, I prefer to use a `groupID` under my own domain for branding reasons. Along with this issue, I just didn't have enough control over my artifacts with Jitpack.
|
||||
|
||||
From Jitpack, I moved on to hosting a maven server in a docker container on one of my webservers. This worked fine until my server crashed from a configuration issue. I decided that self-hosting was not the way to go until I have set up a more stable storage infrastructure.
|
||||
|
||||
After my attempt at self-hosting, I moved to (and quickly away from) [GitHub Packages](https://github.com/features/packages). GitHub Packages is a great service with a huge drawback: anyone wanting to use one of my libraries must authenticate with the GitHub maven servers. Along with that, the buildsystem configuration needed to actually load a GitHub Packages artifact is currently a bit of a mess. While GitHub staff have addressed this issue, and a way to load packages without authentication is rumored to be coming to the platform sometime soon, I don't want to wait. After this adventure, I got curious.
|
||||
|
||||
<div class="center" markdown="1">
|
||||
|
||||
> *How hard is it to write my own maven server?*
|
||||
|
||||
</div>
|
||||
|
||||
Turns out, not very hard at all.
|
||||
|
||||
Maven servers are basically glorified static file servers that deliver specific files at specific paths. On top of this, the entire protocol is XML-based, which makes building one super easy. When a buildsystem like Maven or Gradle wants to fetch an artifact from a maven server, it first makes a request to `http(s)://<baseurl>/<groupID>/<artifactID>/<version>/<artifactID>-<version>.pom` to find out any needed data about the requested artifact. An example of this file's contents could be:
|
||||
|
||||
```xml
|
||||
<!-- Response for http://maven.example.com/ca/retrylife/librandom/1.0.0/librandom-1.0.0.pom -->
|
||||
<project
|
||||
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"
|
||||
xmlns="http://maven.apache.org/POM/4.0.0"
|
||||
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
|
||||
<modelVersion>4.0.0</modelVersion>
|
||||
<groupId>ca.retrylife</groupId>
|
||||
<artifactId>librandom</artifactId>
|
||||
<version>1.0.0</version>
|
||||
</project>
|
||||
```
|
||||
|
||||
I don't exactly know the reason this file exists in most cases, since all the data returned is data the client already knows. Judging by the [Project Object Model](https://maven.apache.org/guides/introduction/introduction-to-the-pom.html) specifications, some servers might use this file to return additional metadata about the artifact, but none of this data is required for my minimal working example.
|
||||
|
||||
Along with this request, another is sometimes made to `http(s)://<baseurl>/<groupID>/<artifactID>/maven-metadata.xml`, which is an XML file containing a list of all artifact versions stored on the server. From my testing with Gradle, a call to this endpoint is only made if there is a wildcard in the requested version in a user's build configuration. An example of this would be `ca.retrylife:librandom:1.+`. An example of this file's contents could be:
|
||||
|
||||
```xml
|
||||
<metadata modelVersion="1.1.0">
|
||||
<groupId>ca.retrylife</groupId>
|
||||
<artifactId>librandom</artifactId>
|
||||
<version>1.0.1</version>
|
||||
<versioning>
|
||||
<latest>1.0.1</latest>
|
||||
<release>1.0.1</release>
|
||||
<versions>
|
||||
<version>1.0.1</version>
|
||||
<version>1.0.0</version>
|
||||
</versions>
|
||||
<lastUpdated>1599079384</lastUpdated>
|
||||
</versioning>
|
||||
</metadata>
|
||||
```
|
||||
|
||||
Finally, a request is made to `http(s)://<baseurl>/<groupID>/<artifactID>/<version>/<artifactID>-<version>.jar`, which should just return the correct JAR file for the library. Pretty simple.
|
||||
|
||||
## The magic behind Ultralight
|
||||
|
||||
[Ultralight Maven](https://ultralight.retrylife.ca) is a small serverless maven server I built for myself. The Ultralight backend app listens for each of these three requests, and will handle each of the following cases. I use a YAML file to tell the backend what artifact names I want it to "serve", and their GitHub repository names.
|
||||
|
||||
***Case 1.*** The client has asked for a Project Object Model for an artifact. In this case, the backend will make sure the requested artifact name is listed in its configuration file, then simply parse all of the needed data out of the request URL, and send it right back to the client.
|
||||
|
||||
***Case 2.*** The client has asked for a `maven-metadata.xml` file. In this case, the backend will first make sure the artifact exists, then make a request out to the [GitHub REST API](https://docs.github.com/en/rest), and ask for a list of all tag names in the artifact's repository. For every tag that contains an asset with the same name as the artifact, the tag's version number will be added to the list of valid versions in the response.
|
||||
|
||||
***Case 3.*** The client has asked for an artifact's JAR file. In this case, the backend will first make sure the artifact exists, then make a request out to the GitHub API, and ask for the correct asset URL on GitHub's servers. With this url, Ultralight just crafts an [HTTP 302](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/302) response. This makes the client actually request from GitHub itself instead of the Ultralight server, thus Ultralight never needs to store any artifacts.
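To make those three cases a little more concrete, here is a heavily simplified Python/Flask sketch of the same idea. This is *not* Ultralight's actual code: the `ARTIFACTS` table, the route layout, and the GitHub release URL pattern are assumptions for illustration, and case 2 (the `maven-metadata.xml` file) is omitted since it needs a call to the GitHub API.

```python
from flask import Flask, Response, redirect

app = Flask(__name__)

# Hypothetical mapping of served artifacts to their GitHub repositories
ARTIFACTS = {"librandom": "Ewpratten/librandom"}

POM_TEMPLATE = """<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>ca.retrylife</groupId>
  <artifactId>{artifact}</artifactId>
  <version>{version}</version>
</project>"""


@app.route("/ca/retrylife/<artifact>/<version>/<filename>")
def serve_artifact(artifact, version, filename):
    if artifact not in ARTIFACTS:
        return Response("Unknown artifact", status=404)

    if filename.endswith(".pom"):
        # Case 1: echo the coordinates from the URL back as a minimal POM
        pom = POM_TEMPLATE.format(artifact=artifact, version=version)
        return Response(pom, mimetype="application/xml")

    if filename.endswith(".jar"):
        # Case 3: redirect the client to the real JAR hosted on GitHub
        repo = ARTIFACTS[artifact]
        url = f"https://github.com/{repo}/releases/download/{version}/{artifact}-{version}.jar"
        return redirect(url, code=302)

    return Response("Not found", status=404)
```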
|
||||
|
||||
Both to make the experience faster, and to get around GitHub's rate limiting on the tags API, Ultralight sends the client [`stale-while-revalidate`](https://vercel.com/docs/edge-network/caching#stale-while-revalidate) cache control headers. This forces the Vercel server that hosts Ultralight to only update its cache once per minute (slightly slower than the GitHub rate limit 😉)
|
||||
|
||||
For instructions on how to set up your own maven server using Ultralight, see the [README](https://github.com/Ewpratten/ultralight#ultralight) on GitHub.
|
61
src/collections/_posts/2020-09-24-gopro-webcam.md
Normal file
@ -0,0 +1,61 @@
|
||||
---
|
||||
layout: default
|
||||
title: 'My workflow: video conference edition'
|
||||
description: Turning some spare filmmaking equipment into a high-quality video conference
|
||||
setup
|
||||
date: 2020-09-24
|
||||
written: 2020-09-13
|
||||
tags: video cameras workflow
|
||||
extra:
|
||||
excerpt: As my courses have moved mostly online, I have looked to improve my live
|
||||
video setup. This post covers how I stream sharp HD video at home, and some interesting
|
||||
quirks of the setup.
|
||||
redirect_from:
|
||||
- /post/XcaM04o4/
|
||||
- /XcaM04o4/
|
||||
aliases:
|
||||
- /blog/2020/09/24/gopro-webcam
|
||||
- /blog/gopro-webcam
|
||||
---
|
||||
|
||||
It has been quite some fun writing about my workflows for various day-to-day things on this blog recently, and since I have been getting a lot of positive feedback from my last few workflow-related posts, I am planning to continue writing them.
|
||||
|
||||
As my courses and work have moved mostly online, I have looked to improve my home setup. This started out with investing in another monitor to be dedicated to displaying server and network status info for my recent summer internship at [Industrial Brothers](https://industrialbrothers.com/). After that, I started looking in to purchasing a high-end condenser microphone and another audio interface to drive it, but quickly discovered that Lenovo did such a good job on the internal mic in my [ThinkPad T480](https://www.lenovo.com/ca/en/laptops/thinkpad/thinkpad-t-series/ThinkPad-T480/p/22TP2TT4800) that it is hard to buy a better microphone without spending a large chunk of money.
|
||||
|
||||
So, I can keep video conferences on their own screen, while still doing work on the other two, plus I have a pretty good mic. There is only one thing left to upgrade. The webcam!
|
||||
|
||||
I went searching for something decent, and immediately encountered a large issue: everyone is sold out of webcams. That's fair; there is a massive market for them right now, and not many companies are actively producing new products. This led me to a secondary option, one that I had been planning to try quite a while ago, back when I ran a joint gaming YouTube channel with a friend in elementary school.
|
||||
|
||||
Commonly, professional videographers will actually use a spare DSLR or point&shoot camera with a video output as their webcam. When I had originally looked into doing this, I was turned off by the insane prices of capture cards. At the time, I was only familiar with large tech brands, and seeing a mid-tier capture card for $500 wasn't exactly 12-year-old-me-friendly.
|
||||
|
||||
More recently, I have gotten into searching eBay for manufacturing facilities (generally in China) selling unbranded products in bulk directly. Since these products come basically straight off an assembly line, without going through any other company, they end up being ridiculously cheap! *(I recommend looking for Arduinos this way. You can usually acquire them in batches at $1.15 per board)*
|
||||
|
||||
During one of my searches, I stumbled across a few sellers selling batches of generic / un-branded capture cards. Clearly designed to be resold with custom branding. You can pick one of these up for only $15! The one I have can do 1080p/60 and 2.7k/30 (disclaimer: these are the only resolutions I have tried. It can probably handle other sizes).
|
||||
|
||||
Another nice thing about the card I got: since it isn't made by a company that requires special software to run its products (cough, Elgato, cough), the device was plug&play compatible with my Ubuntu laptop!
|
||||
|
||||
## Alright. Enough about cheap capture cards, and on to my setup.
|
||||
|
||||
For the actual camera, I opted for the *GoPro Hero 5 Black edition* that has been sitting on my desk unused for the past year. Unfortunately, this camera is no longer sold by GoPro. If you are looking to replicate this setup, I can strongly recommend picking up the [Hero 7 Black edition](https://gopro.com/en/us/shop/cameras/hero7-black/CHDHX-701-master.html), which is the most recent GoPro camera to support live HDMI video without needing a bunch of accessories.
|
||||
|
||||
Speaking of HDMI video, I also picked up a Micro HDMI to HDMI cable to connect from the camera's output to the capture card. [This one here](https://www.ebay.ca/itm/Micro-HDMI-to-HDMI-Cable-Supports-Ethernet-3D-1080P-Audio-Return-3-6-10-15FT/193637232780?hash=item2d15adb08c:g:KDMAAOSwNGNfRdnR) should do the trick, and is only $8.
|
||||
|
||||
For mounting, I went with a [Joby GorillaPod](https://joby.com/global/gorillapod/), but let's be honest: these are GoPros. There is no shortage of mounting solutions for them. You could do a first-person-view conference call if you really wanted. (New project idea..?)
|
||||
|
||||
## Software Setup
|
||||
|
||||
The software setup process is quite simple, and fairly painless. First, near the end of every GoPro's settings menu is a mode selector for how HDMI output should behave. Setting this to `live` will cause the camera to output exactly what it sees, without any status icons or timestamps, to the capture card.
|
||||
|
||||
On the computer end, these cheap capture cards identify themselves as webcams. So there isn't really much setup needed. That being said, many people I know like to send their capture card output into [Open Broadcaster Software](https://obsproject.com/), process the feed, then export it as a [virtual webcam](https://obsproject.com/forum/resources/obs-virtualcam.949/).
|
||||
|
||||
## Some neat things to try
|
||||
|
||||
Experimenting with my camera's many video cropping / scaling modes has been quite fun. I have discovered that keeping the camera in `linear` mode is good for general usage and presenting, and switching to `superwide` is great if I need to physically demonstrate or show something.
|
||||
|
||||
I recently remembered that GoPros also have [voice commands](https://www.captureguide.com/gopro-voice-commands/). I have been using this to switch between `timelapse`, `video`, and `photo` modes, where I have saved a video preset in each. This is a very cheaty way to change my camera settings on the fly without needing to use the GoPro app on my phone. Here is what each of these modes is set to do on my camera:
|
||||
|
||||
| Mode | Action |
|
||||
|-----------|---------------------------------------------------------------------------------------------------|
|
||||
| timelapse | Narrow view, zoomed in on my face. This looks like a normal laptop webcam |
|
||||
| video | Linear view. Very crisp, and auto-lowlight handling enabled. This looks like I'm using a DSLR |
|
||||
| photo     | Superview. Zoomed all the way out, at full resolution. It's so wide, you can see what's on my desk   |
|
172
src/collections/_posts/2020-10-01-reading-a-bitmap.md
Normal file
@ -0,0 +1,172 @@
|
||||
---
|
||||
layout: default
|
||||
title: Reading metadata from a bitmap file
|
||||
description: A project writeup
|
||||
date: 2020-10-01
|
||||
written: 2020-09-15
|
||||
tags: project c images
|
||||
extra:
|
||||
excerpt: Inspired from one of my friend's projects, I built a small tool for displaying
|
||||
bitmap file info from the command line.
|
||||
redirect_from:
|
||||
- /post/XcaMdj2m/
|
||||
- /XcaMdj2m/
|
||||
uses:
|
||||
- github-cards
|
||||
aliases:
|
||||
- /blog/2020/10/01/reading-a-bitmap
|
||||
- /blog/reading-a-bitmap
|
||||
---
|
||||
|
||||
Recently, @rsninja722 was telling me about [a project](https://github.com/rsninja722/file2bmp) he was working on. The basic idea is that you pass a file into his program, and it generates a bitmap of the binary data. This was inspired by [an old post of mine](@/blog/2019-09-11-Buildingimgfrombin.md) where I did the same thing with a horribly written Python script and the library [`pillow`](https://github.com/python-pillow/Pillow).
|
||||
|
||||
Both of us are currently teaching ourselves the **C** programming language. Him, for a break from JavaScript. Me, for no particular reason. As somebody who mostly lives in the world of high-level C-family languages (C++ and Python), learning C has been a challenging, fun, and rewarding experience. I enjoy immersing myself in *"the old way of doing things"*. This means sitting down with my Father's old [*ANSI Standard C Programmer's Reference*](https://archive.org/search.php?query=external-identifier%3A%22urn%3Aoclc%3Arecord%3A1028045558%22) book, and looking up what I need to know through a good old appendix full of libc headers and their function lists.
|
||||
|
||||
While @rsninja722 was working on his project, I found myself using `xxd` and `python3` a lot to debug small issues he encountered. This is fairly tedious, so I set out to write myself a tool to help. I have a small GitHub repository called [smalltools](https://github.com/Ewpratten/smalltools) where I keep the source code to a few small programs I write for fun. I added a new tool file to the repo (called `bmpinfo`) and got to work.
|
||||
|
||||
## How does a bitmap work?
|
||||
|
||||
This was the first big question. I had learned a while ago when working on another project that the image data stored in a bitmap is just raw pixel values, but aside from that, I had no clue how this file format works. Luckily, Wikipedia came to the rescue (as per usual) with [this great article](https://en.wikipedia.org/wiki/BMP_file_format). It turns out that the file metadata, like the pixel values, is stupidly simple to work with**<sup>1, 2</sup>**.
|
||||
|
||||
<div style="color:gray;" markdown="1">
|
||||
|
||||
***1.** I am going to cover only images with `24-bit` color, with no compression*<br>
|
||||
***2.** All integers in a bitmap are little-[endian](https://en.wikipedia.org/wiki/Endianness). These must be converted to the host's endianness*
|
||||
|
||||
</div>
|
||||
|
||||
A simple bitmap file consists of only three parts (although the specification can support more data):
|
||||
|
||||
1. A file header
|
||||
2. File information / metadata
|
||||
3. Pixel data
|
||||
|
||||
I will cover each individually.
|
||||
|
||||
### The file header
|
||||
|
||||
Like any other standard binary file format, bitmaps start with a file header. This is a block of data that tells programs what this file is, and how it works. The bitmap file header starts with two characters that tell programs what type of bitmap this is. I have only worked with **BM** type files, but the following are all possible file types:
|
||||
|
||||
| Identifier | Type |
|
||||
|------------|--------------------------------|
|
||||
| **BM** | Windows 3.1x, 95, NT, ... etc. |
|
||||
| **BA** | OS/2 struct bitmap array |
|
||||
| **CI** | OS/2 struct color icon |
|
||||
| **CP** | OS/2 const color pointer |
|
||||
| **IC** | OS/2 struct icon |
|
||||
| **PT** | OS/2 pointer |
|
||||
|
||||
|
||||
The rest of the data is fairly standard. Since I am working in **C**, I have defined this data as a [`struct`](https://en.wikipedia.org/wiki/Struct_(C_programming_language)). Here is the header:
|
||||
|
||||
```c
|
||||
typedef struct {
|
||||
// File signature
|
||||
char signature[2];
|
||||
|
||||
// File size
|
||||
uint32_t size;
|
||||
|
||||
// Reserved data
|
||||
uint16_t reservedA;
|
||||
uint16_t reservedB;
|
||||
|
||||
// Location of the first pixel
|
||||
uint32_t data_offset;
|
||||
} header_t;
|
||||
```
|
||||
|
||||
### Bitmap Information Header
|
||||
|
||||
The *Bitmap Information Header* (also called **DIB**) contains more information about the file, and can vary in size based on the program that created it. As mentioned earlier, I will only cover the simplest implementation. Due to the possibility of multiple DIB formats, the first element of the header is its own size in bytes. This way, any program can handle any size of DIB without needing to actually implement every header type.
|
||||
|
||||
Like the file header, I have also written this as a `struct`.
|
||||
|
||||
```c
|
||||
typedef struct {
|
||||
// Size of self
|
||||
uint32_t size;
|
||||
|
||||
// Image dimensions in pixels
|
||||
int32_t width;
|
||||
int32_t height;
|
||||
|
||||
// Image settings
|
||||
uint16_t color_planes;
|
||||
uint16_t color_depth;
|
||||
uint32_t compression;
|
||||
uint32_t raw_size; // This is generally unused
|
||||
|
||||
// Resolution in pixels per metre
|
||||
int32_t horizontal_ppm;
|
||||
int32_t vertical_ppm;
|
||||
|
||||
// Other settings
|
||||
uint32_t color_table;
|
||||
uint32_t important_colors;
|
||||
} info_t;
|
||||
```
|
||||
|
||||
Some notes about the data in this header:
|
||||
|
||||
- Image dimensions are **signed** integers. Using a negative size will cause image data to be read right-to-left and bottom-to-top
|
||||
- A setting is present for the pixel density of the image. This is measured in pixels-per-metre (usually `3780`)
|
||||
- The `color_table` is the number of colors used in the palette. This defaults to `0` (meaning *all*)
|
||||
- The `important_colors` is the number of colors that are important in the image. This defaults to `0` (meaning *all*) and is almost never used
|
||||
|
||||
### Pixel data
|
||||
|
||||
After the file headers comes the pixel data. This is written pixel-by-pixel, and is stored as 3 bytes in the format `BBGGRR` (little-endian, remember?).
|
||||
|
||||
## Loading a bitmap file into a C program
|
||||
|
||||
For simplicity, I am going to write this for a computer based on a little-endian architecture (which covers most modern desktop CPUs). On a big-endian machine, you would need to [reverse the endianness](https://codereview.stackexchange.com/a/151070) of everything read in.
|
||||
|
||||
```c
|
||||
#include <stdlib.h>
|
||||
#include <stdio.h>
|
||||
#include <stdint.h>
|
||||
|
||||
// The header_t and info_t structs defined above are used here
|
||||
|
||||
typedef struct {
|
||||
uint8_t blue;
|
||||
uint8_t green;
|
||||
uint8_t red;
|
||||
} pixel_t;
|
||||
|
||||
int main(){
|
||||
// Read a bitmap
|
||||
FILE* p_bmp = fopen("myfile.bmp", "rb");
|
||||
|
||||
// Create header and info data
|
||||
header_t header;
|
||||
info_t info;
|
||||
|
||||
// Read from the file.
|
||||
// Some compilers will pad structs, so I have
|
||||
// manually entered their sizes (14, and 40 bytes)
|
||||
fread(&header, 14, 1, p_bmp);
|
||||
fread(&info, 40, 1, p_bmp);
|
||||
|
||||
// Read every pixel
|
||||
while(1){
|
||||
pixel_t pixel;
|
||||
if(fread(&pixel, 3, 1, p_bmp) == 0) break;
|
||||
|
||||
// Do something with the pixel
|
||||
// ...
|
||||
}
|
||||
|
||||
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
## And that's it!
|
||||
|
||||
Reading bitmap data is really quite simple. Of course, there are many sub-standards and formats that require more code, and sometimes decompression algorithms, but this is just an overview.
|
||||
|
||||
If you would like to see the small library I built for myself for doing this, take a look [here](https://github.com/Ewpratten/smalltools/tree/master/utils/img). (it includes endianness handling)
|
111
src/collections/_posts/2020-10-15-mounting-google-drives.md
Normal file
@ -0,0 +1,111 @@
|
||||
---
|
||||
layout: default
|
||||
title: Mounting Google Drive accounts as network drives
|
||||
description: Easy-to-use Google Drive integration for Linux using rclone
|
||||
date: 2020-10-15
|
||||
written: 2020-09-22
|
||||
tags:
|
||||
- linux
|
||||
- workflow
|
||||
- google
|
||||
- walkthrough
|
||||
extra:
|
||||
excerpt: 'I can never get the Google Drive webapp to load quickly when I need it to.
|
||||
My solution: use some command-line magic to mount my drives directly to my laptop''s
|
||||
filesystem.'
|
||||
redirect_from:
|
||||
- /post/XcaM0k24/
|
||||
- /XcaM0k24/
|
||||
aliases:
|
||||
- /blog/2020/10/15/mounting-google-drives
|
||||
- /blog/mounting-google-drives
|
||||
---
|
||||
|
||||
When sharing files, I use three main services. I use [Firefox Send](https://en.wikipedia.org/wiki/Firefox_Send) and [KeybaseFS](https://book.keybase.io/docs/files) to share one-off and large files with friends, and I use [Google Drive](https://drive.google.com) to store some personal files, and for everything school-related (I don't get a choice about this). For the first two services, sharing a file is as simple as calling [`ffsend`](https://github.com/timvisee/ffsend) or moving a local file into my kbfs mountpoint, and I am done. With Google Drive, on the other hand, the process isn't as easy. While some Linux distributions have Google Drive integration out of the box (I miss daily-driving [ChromiumOS](https://www.chromium.org/chromium-os)), Linux users generally have to go to `drive.google.com`, and deal with the Google Drive webapp. Not sure if this is an "only me" problem, but whenever I need to quickly make a change to a document through the webapp, it decides to stop working.
|
||||
|
||||
I really like the Keybase approach of mounting remote storage as a "network drive" on my laptop, and wanted to do something similar for Google Drive. This is where a great tool called [`rclone`](https://rclone.org) comes into play. Rclone is a very easy-to-use command-line application for working with cloud storage. I originally learned about it when I used to host this website on [DigitalOcean Spaces](https://www.digitalocean.com/products/spaces/) a few years ago. Out of the box, Rclone supports [many cloud providers](https://rclone.org/#providers), including Google Drive!
|
||||
|
||||
## Setting up Rclone for use with Google Drive
|
||||
|
||||
Now for the fun part. To get started with Rclone and Google Drive on your computer, you must first install Rclone. I am going to assume you are using a Linux-based operating system here, but with some slight tweaking, this works on BSD and Windows too!
|
||||
|
||||
```sh
|
||||
# Install Rclone with the automated installer
|
||||
curl https://rclone.org/install.sh | sudo bash
|
||||
```
|
||||
|
||||
Once Rclone is installed, you need to hop on over to the [Google Cloud Developer Console](https://console.developers.google.com/), and create a new project. Under the *ENABLE APIS AND SERVICES* section, search for, and enable the `Google Drive API`. This will expose an API to your Google Drive, and let programs interact with the files (if setting up multiple accounts, you only need to enable the API on one of them). Click the *Credentials* tab in the left-side panel, then *Create credentials*. This will open a panel letting you set up access to your new API.
|
||||
|
||||
With the panel open, click *CONFIGURE CONSENT SCREEN*, *External*, then *CREATE*. Enter `rclone` as the application name, and save it. You now have set up one of those "sign in with Google" screens for yourself. Clicking the *Credentials* tab again will bring you to an area where you can generate the needed API keys for Rclone.
|
||||
|
||||
Click *+ CREATE CREDENTIALS* at the top of the panel, and select *OAuth client ID*. Set the application type to *Desktop app*, and finally, press *Create*. You will now be shown the needed info to link Rclone to your account(s).
|
||||
|
||||
<div class="center" markdown="1">
|
||||
*Note: This API project is **not verified** by Google.*<br>
|
||||
*This means that you will be greeted with a scary warning when logging in the first time. Just ignore it.*
|
||||
</div>
|
||||
|
||||
Back in the terminal, we can run `rclone config` to set up a configuration for Google Drive. You will be prompted with many options. Use the following:
|
||||
```sh
|
||||
# > rclone config
|
||||
|
||||
# Create a new config
|
||||
n) New remote
|
||||
|
||||
# Set a name
|
||||
name> my_drive
|
||||
|
||||
# Choose a storage type
|
||||
Storage> drive
|
||||
|
||||
# You will be asked for a client ID and secret. These are the strings we just generated
|
||||
...
|
||||
|
||||
# Set the scope to allow Rclone access to your files
|
||||
scope> 1
|
||||
|
||||
# Select the default option for everything until asked if you want to use "auto config"
|
||||
# When asked, say yes
|
||||
auto_config> y
|
||||
|
||||
# Set team drive to no
|
||||
team_drive> n
|
||||
|
||||
# Verify the information, then say yes
|
||||
ok> y
|
||||
```
|
||||
|
||||
Almost done. You need to run `rclone ls my_drive:` (the colon is important). This will probably ask you to go to a link, and enable an API. Do so.
|
||||
|
||||
Your Google Drive can now be mounted by running the following (feel free to change the paths to whatever you want)
|
||||
|
||||
```sh
|
||||
mkdir -p ~/google_drive
|
||||
rclone mount my_drive: ~/google_drive --vfs-cache-mode writes
|
||||
```
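Since `rclone mount` uses FUSE under the hood, unmounting on most Linux systems should be as simple as:

```sh
# Unmount the FUSE mountpoint when you are done with it
fusermount -u ~/google_drive
```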
|
||||
|
||||
## Starting Rclone on boot
|
||||
|
||||
You probably don't want to run an `rclone` command every time you start your computer. This can be solved in one of two ways:
|
||||
|
||||
### For i3wm users
|
||||
|
||||
On `i3wm`, just add the following line to `~/.config/i3/config`:
|
||||
|
||||
```sh
|
||||
exec --no-startup-id rclone mount my_drive: /home/<user>/google_drive --vfs-cache-mode writes
|
||||
```
|
||||
|
||||
*Make sure to replace `<user>` with your username.*
|
||||
|
||||
Keep in mind, `exec` commands are not run when reloading `i3` with <kbd>Mod</kbd>+<kbd>Shift</kbd>+<kbd>r</kbd>. You must log out (<kbd>Mod</kbd>+<kbd>Shift</kbd>+<kbd>e</kbd>), and back in again.
|
||||
|
||||
### For Ubuntu / Debian-based users
|
||||
|
||||
In pretty much any Debian-based system, you can edit `/etc/rc.local` (using `sudo`), and add the following line right before `exit 0`:
|
||||
|
||||
```sh
|
||||
rclone mount my_drive: /home/<user>/google_drive --vfs-cache-mode writes
|
||||
```
|
||||
|
||||
*Make sure to replace `<user>` with your username.*
|
128
src/collections/_posts/2020-10-24-corepack-development.md
Normal file
@ -0,0 +1,128 @@
|
||||
---
|
||||
layout: default
|
||||
title: Using Bazel to create Minecraft modpacks
|
||||
description: An overview of how I automated the build process for CorePack
|
||||
date: 2020-10-24
|
||||
written: 2020-09-27
|
||||
tags: bazel workflow git minecraft
|
||||
extra:
|
||||
excerpt: I decided to modernize my system for producing builds of my personal Minecraft
|
||||
modpack using the Bazel buildsystem.
|
||||
redirect_from:
|
||||
- /post/XlA00k24/
|
||||
- /XlA00k24/
|
||||
aliases:
|
||||
- /blog/2020/10/24/corepack-development
|
||||
- /blog/corepack-development
|
||||
---
|
||||
|
||||
*All content of this post is based around the work I did [here](https://github.com/Ewpratten/corepack)*
|
||||
|
||||
Back in [2012](https://minecraft.gamepedia.com/Java_Edition_1.2.5), I got into Minecraft mod development, and soon after, put together an almost-vanilla client-side modpack for myself that mainly contained rendering, UI, and quality-of-life tweaks. While this modpack was never published, or even given a name, I kept maintaining it for years until I eventually stopped playing Minecraft just before the release of Minecraft [`1.9`](https://minecraft.gamepedia.com/Java_Edition_1.9) (in 2016). I had gotten so used to the features of this modpack that playing truly vanilla Minecraft didn't feel right.
|
||||
|
||||
Recently, a few friends invited me to join their private Minecraft server, and despite having not touched the game for around four years, I decided to join. This was a bit of a mistake on their part, as they now get the pleasure of someone who used to main [`1.6.4`](https://minecraft.gamepedia.com/Java_Edition_1.6.4) constantly walking up to things and asking *"What is this and how does it work?"*. I have started to get used to the very weird new collection of blocks, completely reworked command system, over-complicated combat system, and a new rendering system that makes everything "look wrong".
|
||||
|
||||
One major thing was still missing though: *where was my modpack?* I set out to rebuild my good old modpack (and finally give it a name, *CorePack*). Not much has changed: most of the same rendering and UI mods are back, along with the same [GLSL](https://en.wikipedia.org/wiki/OpenGL_Shading_Language) shaders, and similar textures. I did, however, decide to take a *"major step"* and switch from the [Forge Mod Loader](http://files.minecraftforge.net/) to the [Fabric Loader](https://fabricmc.net/), since I prefer Fabric's API.
|
||||
|
||||
## Curseforge & Bazel
|
||||
|
||||
I don't remember [Curseforge](https://curseforge.com/) existing back when I used to play regularly. It is a huge improvement over the [PlanetMinecraft](https://www.planetminecraft.com/) forums, as Curse provides a clean way to access data about published Minecraft mods, and even has an API! Luckily, since I switched the modpack to Fabric, every mod I was looking for was available through Curse (although it seems [NEI](https://www.curseforge.com/minecraft/mc-mods/notenoughitems) is a thing of the past).
|
||||
|
||||
My main goal for the updated version of CorePack was to design it in such a way that a CI pipeline could generate new releases for me when mods are updated. This requires programmatically pulling information about mods and their JAR files using a buildsystem script. Since this project involves working with a large amount of data from various external sources, I once again chose to use [Bazel](https://bazel.build), a buildsystem that excels at these kinds of projects.
|
||||
|
||||
While Curseforge provides a very easy-to-use API for working with mod data, @Wyn-Price (a fellow mod developer) has put together an amazing project called [Curse Maven](https://www.cursemaven.com/) that I decided to use instead. Curse Maven is a serverless API that acts much like my [Ultralight project](@/blog/2020-09-17-Ultralight-writeup.md). Any request for an artifact to Curse Maven will be redirected and served from the [Curseforge Maven server](https://authors.curseforge.com/knowledge-base/projects/529-api#Maven), without the need for me to figure out the long-form artifact identifiers used internally by Curse.
|
||||
|
||||
Curse Maven makes loading a mod (in this case, [`fabric-api`](https://www.curseforge.com/minecraft/mc-mods/fabric-api)) into Bazel as easy as:
|
||||
|
||||
```python
|
||||
# WORKSPACE
|
||||
# Load bazel_maven_repository
|
||||
http_archive(
|
||||
name = "maven_repository_rules",
|
||||
strip_prefix = "bazel_maven_repository-1.2.0",
|
||||
type = "zip",
|
||||
urls = ["https://github.com/square/bazel_maven_repository/archive/1.2.0.zip"],
|
||||
)
|
||||
load("@maven_repository_rules//maven:maven.bzl", "maven_repository_specification")
|
||||
load("@maven_repository_rules//maven:jetifier.bzl", "jetifier_init")
|
||||
jetifier_init()
|
||||
|
||||
# Declare any mods as maven artifacts
|
||||
maven_repository_specification(
|
||||
name = "maven",
|
||||
artifacts = {
|
||||
"curse.maven:fabric-api:3049174": {"insecure": True}
|
||||
},
|
||||
repository_urls = [
|
||||
"https://www.cursemaven.com",
|
||||
],
|
||||
)
|
||||
```
|
||||
|
||||
The above snippet uses a Bazel ruleset developed by [Square, Inc.](https://squareup.com/ca/en) called [`bazel_maven_repository`](https://github.com/square/bazel_maven_repository).
|
||||
|
||||
## Modpack configuration
|
||||
|
||||
Since my pack is designed for use with [MultiMC](https://multimc.org/), two sets of configuration files are needed. The first set tells MultiMC which versions of [LWJGL](https://www.lwjgl.org/), Minecraft, and Fabric to use, and the second set is the in-game config files. Many of these files contain information that I would like to modify from Bazel during the modpack build step. Luckily, the [Starlark](https://docs.bazel.build/versions/master/skylark/language.html) core library comes with an action called [`expand_template`](https://docs.bazel.build/versions/2.0.0/skylark/lib/actions.html#expand_template). `expand_template` is basically a find-and-replace tool that will perform substitutions on files. Since this is an action, and not a rule, it must be wrapped with a small rule declaration:
|
||||
|
||||
```python
|
||||
# tools/template.bzl
|
||||
def expand_template_impl(ctx):
|
||||
ctx.actions.expand_template(
|
||||
template = ctx.file.template,
|
||||
output = ctx.outputs.out,
|
||||
substitutions = {
|
||||
k: ctx.expand_location(v, ctx.attr.data)
|
||||
for k, v in ctx.attr.substitutions.items()
|
||||
},
|
||||
is_executable = ctx.attr.is_executable,
|
||||
)
|
||||
|
||||
expand_template = rule(
|
||||
implementation = expand_template_impl,
|
||||
attrs = {
|
||||
"template": attr.label(mandatory = True, allow_single_file = True),
|
||||
"substitutions": attr.string_dict(mandatory = True),
|
||||
"out": attr.output(mandatory = True),
|
||||
"is_executable": attr.bool(default = False, mandatory = False),
|
||||
"data": attr.label_list(allow_files = True),
|
||||
},
|
||||
)
|
||||
```
|
||||
|
||||
In a `BUILD` file, template rules can be defined as follows:
|
||||
|
||||
```python
|
||||
# BUILD
|
||||
load("//tools:template.bzl", "expand_template")
|
||||
|
||||
expand_template(
|
||||
name = "my_config",
|
||||
template = "config.json.in",
|
||||
out = "config.json",
|
||||
substitutions = {
|
||||
"TEST_SUBS": "hello world"
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
Using the following example file as `config.json.in`, this rule would have the following effect:
|
||||
|
||||
```js
|
||||
// config.json.in
|
||||
{
|
||||
"key": "TEST_SUBS"
|
||||
}
|
||||
|
||||
// config.json
|
||||
{
|
||||
"key": "hello world"
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
## Packaging
|
||||
|
||||
Once mods are loaded and configuration files are defined in the buildsystem, I use a large number of [`filegroup`](https://docs.bazel.build/versions/master/be/general.html#filegroup) and [`genrule`](https://docs.bazel.build/versions/master/be/general.html#genrule) rules to set up a directory hierarchy in the workspace, and wrap everything in a call to [`zipper`](https://sourcegraph.com/github.com/v2ray/ext/-/blob/bazel/zip.bzl#L23:25) to package the modpack into a ZIP file (a rough sketch of this pattern is shown below).
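The snippet below is only a simplified sketch of that idea: it uses a plain `zip` call instead of the `zipper` tool from the real repository, and a made-up `:my_config` target standing in for the generated config files.

```python
# BUILD (illustrative sketch only -- not the actual CorePack rules)

# Group the generated instance files together
filegroup(
    name = "instance_files",
    srcs = [":my_config"],  # e.g. the expand_template output from earlier
)

# Lay the files out in a MultiMC-style hierarchy, then package them up.
# Note: this assumes `zip` is available on the host machine.
genrule(
    name = "modpack",
    srcs = [":instance_files"],
    outs = ["corepack.zip"],
    cmd = "mkdir -p .minecraft/config && " +
          "cp $(locations :instance_files) .minecraft/config/ && " +
          "zip -qr $@ .minecraft",
)
```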
|
||||
|
||||
Finally, I use [GitHub Actions](https://github.com/features/actions) to automatically run the buildscript, and publish the resulting MultiMC instance zip to the [GitHub repo](https://github.com/Ewpratten/corepack) for this project.
|
75
src/collections/_posts/2020-11-06-vortex-core.md
Normal file
@ -0,0 +1,75 @@
|
||||
---
|
||||
layout: default
|
||||
title: 'My first mechanical keyboard: The Vortex Core'
|
||||
description: Just the right amount of obscure
|
||||
date: 2020-11-06
|
||||
written: 2020-09-28
|
||||
tags:
|
||||
- keyboards
|
||||
- workflow
|
||||
- product
|
||||
extra:
|
||||
excerpt: I recently purchased my first mechanical keyboard, and decided to go "all
|
||||
in" with a 40% layout.
|
||||
redirect_from:
|
||||
- /post/XlPl0k24/
|
||||
- /XlPl0k24/
|
||||
aliases:
|
||||
- /blog/2020/11/06/vortex-core
|
||||
- /blog/vortex-core
|
||||
---
|
||||
|
||||
About a month ago, I decided to buy myself a mechanical keyboard. I have always been a huge fan of membrane / laptop keyboards. My current laptop (the Lenovo T480) has a very nice feel to its keyboard, and my previous laptop (the Acer R11) had the best keyboard I have ever used. Switching to a mechanical keyboard wasn't my first choice, but I was open to trying something new, so I didn't see it as a negative. Ever since adding another monitor to my setup, I haven't had enough room on my desk to fit a keyboard. This generally is not a problem since I mainly use my laptop, but I occasionally need to use my desktop for rendering work, which requires a separate keyboard.
|
||||
|
||||
I began to look for keyboards that could fit in the little space in front of my laptop, and stumbled across [a video](https://www.youtube.com/watch?v=ofXOu7zK9IY) from one of my favorite YouTube creators, [Wolfgang](https://www.youtube.com/c/WolfgangsChannel/featured), on the [Niu Mini](https://kbdfans.com/products/niu-mini-40-diy-kit), a 40% keyboard (meaning it has 40% of the keys of a full-size layout). The heavy use of keybindings to get work done on such a small keyboard interested me a lot, and I almost picked up a Niu Mini for myself, but ended up not getting it because I decided I wasn't quite ready to learn how to type on an [ortholinear](https://blog.roastpotatoes.co/review/2015/09/20/ortholinear-experience-atomic/) layout while learning keybindings at the same time.
|
||||
|
||||
Instead of the Niu Mini, I ended up getting myself the cheaper [Vortex Core](https://mechanicalkeyboards.com/shop/index.php?l=product_detail&p=3550). The Core, made by the same company that produces the well-known [POK3R](https://mechanicalkeyboards.com/shop/index.php?l=product_detail&p=3527), is a programmable 40% board with a staggered layout.
|
||||
|
||||
## Overall build
|
||||
|
||||
The Vortex Core is built very nicely. I chose mine with the [Cherry MX Brown](https://www.cherrymx.de/en/mx-original/mx-brown.html) switches, since I dislike overly clicky keyboards, and I have had no problems with noise. The keys also feel very nice, and are effortless to type with. Interestingly, my keyboard shipped with an extra "Windows" key in place of a function key, which, on a keyboard that makes heavy use of function keys, was a bit annoying. Not a huge deal though, since I know what the key does, and I don't spend much time looking at the keycaps anyways.
|
||||
|
||||
That being said, since the keyboard has so many shortcuts and combinations to get things done, I really like the fact that the core comes with color-coded keycaps that tell you what they do.
|
||||
|
||||
The keyboard's baseplate is made of aluminum, and is CNC-cut, so it both looks and feels very nice. For a keyboard that I can wrap my (admittedly large) hand around, it is fairly heavy too (I seem to remember the FedEx shipment coming in at around 3lbs). In this case, heavy is not at all a bad thing. The weight of this keyboard makes it feel... expensive. Also, it never feels like the board is sliding away when I'm typing.
|
||||
|
||||

|
||||
|
||||
One downside, though: in terms of connectivity, the keyboard unfortunately uses a USB Micro connector instead of the newer (and nicer) USB Type-C connector. As someone who connects his life with USB-C, I am not the biggest fan of this choice, but at least I had a right-angle USB Micro cable lying around that I can use with it. Alongside the USB Micro connection, removing the backplate will reveal a [JTAG](https://en.wikipedia.org/wiki/JTAG) connector that allows you to flash custom firmware to the keyboard if you want. @ChaoticEnigma has forked the popular [QMK](https://github.com/qmk/qmk_firmware) keyboard firmware as [`qmk_pok3r`](https://github.com/pok3r-custom/qmk_pok3r), and added support for many Vortex boards, including the Core, if you are looking to load something more custom.
|
||||
|
||||
## Keybindings
|
||||
|
||||
I have been talking non-stop about this keyboard's keybindings. So, *what's up with that?*
|
||||
|
||||
Keybindings are very common on 40% keyboards, since many keys you have probably grown to love simply are not on the keyboard anymore. No <kbd>F</kbd> keys, no number keys, no arrow keys, no symbols, and no quotations either. For this quick overview, I will explain this for a Vortex Core keyboard *without* any custom programming.
|
||||
|
||||
Let's say you wanted to type the number `5`. On the Core, this is done by pressing <kbd>fn1</kbd>+<kbd>F</kbd> (there are three function keys on the Core: <kbd>fn</kbd>, <kbd>fn1</kbd>, and <kbd>pn</kbd>). While this might be a bit confusing at first, it is a fairly simple system to learn, and the color-coded keycap markings make the learning process super easy.
|
||||
|
||||
|
||||
## Programming
|
||||
|
||||
There are three main things I wanted to do immediately after getting my core:
|
||||
|
||||
- Remap <kbd>Caps Lock</kbd> to <kbd>Tab</kbd>
|
||||
- Switch the <kbd>Win</kbd> and <kbd>Alt</kbd> keys to match the layout of my Thinkpad
|
||||
- Remap the arrow keys to [vim keys](https://hea-www.harvard.edu/~fine/Tech/vi.html)
|
||||
|
||||
The first can be done simply by upgrading the Core's firmware to the latest version. Setting custom keybindings, on the other hand, requires switching the Core's firmware to the `MPC` version.
|
||||
|
||||
This process unfortunately requires access to a computer that runs Windows (or VirtualBox). On Windows, the setup process is really quite easy. Go to [this link](http://www.vortexgear.tw/db/upload/webdata4/6vortex_201861271445393.exe), which will download the firmware upgrade tool. Running the tool and plugging in the keyboard will present you with some options.
|
||||
|
||||

|
||||
|
||||
The "bin group" selection provides two options. Selecting `Core by MPC` will flash the re-programmable firmware to the keyboard, and the other option will restore the keyboard to factory firmware.
|
||||
|
||||
Vortex provides a programming tool, but I am not a huge fan of it. I plan to write a Java app that can program the keyboard (and load saved profiles from it), but for now, I am using a great tool made by @tsfreddie called [Much Programming Core](https://tsfreddie.github.io/much-programming-core/). This tool allows you to configure keybindings and remap keys through his website, and there are easy-to-follow instructions on how to download the correct file, and flash your keyboard.
|
||||
|
||||

|
||||
|
||||
Speaking of flashing the board, with the MPC firmware, the process for loading custom keybindings (which works on any OS) is really easy and simple. Just unplug the keyboard, then plug it back in while holding <kbd>fn</kbd>+<kbd>D</kbd>. This will cause the keyboard to mount as a USB drive, and you can drop configuration files on to it.
|
||||
|
||||
## Do I recommend it?
|
||||
|
||||
Well, that depends. If you are the type of person to customize everything for maximum efficiency, go for it! The Vortex Core is a very nice keyboard with more configurability than I can wrap my head around (even if you need a third party tool to do so). If you just want something simple, stick to a 60% keyboard. The lack of numbers on the core drives many people crazy.
|
||||
|
||||
For programmers: you basically need to remap your keys. Most common keys (brackets, quotes, operators, ...) are hidden behind one or two function keys, and the learning curve might hurt for the first week or so.
|
50
src/collections/_posts/2020-11-21-minecraft-irc.md
Normal file
@ -0,0 +1,50 @@
|
||||
---
|
||||
layout: default
|
||||
title: Connecting to a Minecraft server over IRC
|
||||
description: For server administration, or just chatting with friends
|
||||
date: 2020-11-21
|
||||
written: 2020-10-25
|
||||
tags: minecraft project irc
|
||||
extra:
|
||||
excerpt: This post outlines the process of writing a custom IRC server that can
|
||||
bridge between your favorite IRC client, and any Minecraft server
|
||||
redirect_from:
|
||||
- /post/lls5jkd4/
|
||||
- /lls5jkd4/
|
||||
aliases:
|
||||
- /blog/2020/11/21/minecraft-irc
|
||||
- /blog/minecraft-irc
|
||||
---
|
||||
|
||||
As I talked about in my post [about Minecraft modpack development](@/blog/2020-10-24-CorePack-Development.md), I got back into playing Minecraft earlier this year. I primarily play on a server full of friends, where the server owner has [dynmap](https://github.com/webbukkit/dynmap) installed. Dynmap is a handy tool that provides a near-real-time overview of the Minecraft world in the form of a webapp. I always keep Dynmap open on my laptop so I can chat with whoever is online, and see what's being worked on.
|
||||
|
||||
While Dynmap has a built-in chat log and the ability to send chats, the incoming chat messages do not persist, and the outgoing chat messages don't always show your in-game username (showing your public IP address instead). Since I always have an IRC client open, I figured that making use of my IRC client to generate a persistent chat log in the background would be a good solution. Unfortunately, I could not find anyone who has ever built a `Minecraft <-> IRC` bridge. Thus my project, [chatster](https://github.com/Ewpratten/chatster), was born.
|
||||
|
||||
The most basic IRC server consists of a TCP socket and only seven message handlers (a rough sketch of the idea follows the table):
|
||||
|
||||
| Message Type | Description |
|
||||
|--------------|--------------------------------------------------|
|
||||
| `NICK` | Handles a user setting their nickname |
|
||||
| `USER` | Handles a user setting their identity / username |
|
||||
| `PASS` | Handles a user authenticating with the server |
|
||||
| `PING` | A simple ping-pong system |
|
||||
| `JOIN` | Handles a user joining a channel |
|
||||
| `QUIT` | Handles a user leaving a channel |
|
||||
| `PRIVMSG` | Handles a user sending a message |
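For illustration, here is a hypothetical, heavily simplified sketch of such a dispatch loop. This is *not* the actual chatster code; the handler bodies are stand-ins for where the Minecraft bridging would happen.

```python
# Minimal sketch of an IRC-style command dispatcher (illustrative only)
import socket

def handle_line(line, state):
    """Parse one raw IRC line and return an optional reply string."""
    command, _, params = line.partition(" ")
    if command == "NICK":
        state["nick"] = params.strip()        # Mojang email, in chatster's scheme
    elif command == "PASS":
        state["password"] = params.strip()    # Mojang password, kept only in memory
    elif command == "USER":
        state["user"] = params.split(" ")[0]
    elif command == "PING":
        return "PONG " + params.strip()
    elif command == "JOIN":
        pass  # here the bridge would open a socket to the named Minecraft server
    elif command == "PRIVMSG":
        pass  # here the bridge would relay the chat message to Minecraft
    elif command == "QUIT":
        pass  # here the bridge would tear down the Minecraft connection
    return None

# Accept a single client and feed complete lines to the handler
server = socket.create_server(("0.0.0.0", 6667))
conn, _ = server.accept()
state, buffer = {}, ""
while True:
    buffer += conn.recv(4096).decode(errors="ignore")
    while "\r\n" in buffer:
        line, buffer = buffer.split("\r\n", 1)
        reply = handle_line(line, state)
        if reply:
            conn.sendall((reply + "\r\n").encode())
```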
|
||||
|
||||
On the Minecraft side, the following subset of the [in-game protocol](https://wiki.vg/Protocol) must be implemented (I just used the [`pyCraft`](https://github.com/ammaraskar/pyCraft) library for this):
|
||||
|
||||
- User authentication
|
||||
- Receiving [`clientbound.play.ChatMessage`](https://wiki.vg/Protocol#Chat_Message_.28clientbound.29) packets
|
||||
- Sending [`serverbound.play.ChatMessage`](https://wiki.vg/Protocol#Chat_Message_.28serverbound.29) packets
|
||||
|
||||
|
||||
The whole idea of chatster is that a user connects to the IRC server using their [Mojang account](https://account.mojang.com/) email and password as their IRC nickname and server password. The server temporarily stores these values in memory.
|
||||
|
||||
Connecting to a server is done via specific IRC channel names. If you wanted to connect to `mc.example.com` on port `12345`, you would issue the following IRC command:
|
||||
|
||||
```
|
||||
/JOIN #mc.example.com:12345
|
||||
```
|
||||
|
||||
Upon channel join, the server opens a socket to the specified Minecraft server, and relays chat messages (along with their sender) between Minecraft and IRC. This means that in-game users show up in your IRC user list, and you can send commands and chats to the game.
|
107
src/collections/_posts/2020-12-19-vanilla-plus-mods.md
Normal file
@ -0,0 +1,107 @@
|
||||
---
|
||||
layout: default
|
||||
title: How I have tweaked my Minecraft client to be 'just right'
|
||||
description: Pushing the boundaries of a vanilla game, while being able to play on
|
||||
un-modified servers
|
||||
date: 2020-12-19
|
||||
written: 2020-12-04
|
||||
tags:
|
||||
- project
|
||||
- minecraft
|
||||
extra:
|
||||
excerpt: Over the past 10 years, I have been building the perfect Minecraft experience
|
||||
for myself. This post shares the collection of mods I run, and why I use them.
|
||||
redirect_from:
|
||||
- /post/gas49g43/
|
||||
- /gas49g43/
|
||||
aliases:
|
||||
- /blog/2020/12/19/vanilla-plus-mods
|
||||
- /blog/vanilla-plus-mods
|
||||
---
|
||||
|
||||
## The base game
|
||||
|
||||
Starting with the base game: I like to keep this fairly up to date. Right now, my base game is version [`1.16.4`](https://minecraft.gamepedia.com/Java_Edition_1.16.4). Along with the base game, my game launcher of choice, [MultiMC](https://multimc.org/), allows using a custom [LWJGL](https://www.lwjgl.org/) version. I choose to use version `3.2.2`, as it is the most stable for me.
|
||||
|
||||
## Mod loading
|
||||
|
||||
Anyone who has played Minecraft for long enough will remember back when installing a mod involved opening up the game JAR, and dropping new class files into it. Mod loaders essentially still do this, but they provide a much cleaner system for you, the user. For years, I used the [Forge Mod Loader](http://files.minecraftforge.net/), but recently switched to the [Fabric Mod Loader](https://fabricmc.net/), as, in my opinion, the [FabricMC documentation](https://fabricmc.net/wiki/doku.php) is much nicer to deal with.
|
||||
|
||||
Unlike Forge, Fabric generally requires a helper mod to be installed called the [Fabric API](https://www.curseforge.com/minecraft/mc-mods/fabric-api). This exists to provide user-friendly mappings to Minecraft code for mod developers.
|
||||
|
||||
In terms of versions, I am running Fabric Loader version `0.10.6+build.214`, and API version [`0.22.1+build.409-1.16`](https://github.com/FabricMC/fabric/releases/tag/0.22.1%2Bbuild.409-1.16).
|
||||
|
||||
## Network Protocol Translation
|
||||
|
||||
One of my least favorite parts of playing on multiple multiplayer servers is the need to constantly switch Minecraft versions to accommodate every server version I am playing on. For example, the server I talk about in my post about [Minecraft chat over IRC](@/blog/2020-11-21-Minecraft-IRC.md) is running version `1.16.3`. You may have played on some high-end servers like [Hypixel](https://hypixel.net/) or [MCCTF](https://www.brawl.com/front/), where you can connect with any client version you want. These servers are both running [Network Protocol Translation](https://en.wikipedia.org/wiki/Protocol_converter) plugins that will convert between Minecraft server protocol versions as packets are sent and received.
|
||||
|
||||
This can also be set up on the client instead of the server, allowing a single client to connect to multiple server versions. I run both the [Viaversion](https://github.com/ViaVersion/ViaVersion) and [Multiconnect](https://github.com/Earthcomputer/multiconnect) mods, which together allow my `1.16.4` client to play on servers all the way down to `1.8.0`.
|
||||
|
||||
## Rendering
|
||||
|
||||
On the rendering side of the game, I run a few specialized mods to improve or replace various functions of Minecraft's built-in game renderer. Starting with the largest change, I use the [Sodium](https://github.com/jellysquid3/sodium-fabric) renderer, which includes a large number of rendering improvements, and opens up some extra customizability to the user.
|
||||
|
||||
<div class="center" markdown="1">
|
||||
|
||||

|
||||
|
||||
</div>
|
||||
|
||||
The developer of Sodium, @jellysquid3, has a few other rendering-related projects I use. Mainly: [Phosphor](https://github.com/jellysquid3/phosphor-fabric), which makes large improvements to the game's lighting engine, and [Lithium](https://github.com/jellysquid3/lithium-fabric), which makes all-around improvements to the game.
|
||||
|
||||
Speaking of lighting, I also run [Lamb Dynamic Lights](https://github.com/LambdAurora/LambDynamicLights), which illuminates the area around you when holding a torch (very helpful for mining). Anyone who remembers the old [Not Enough Items](https://tekkitclassic.fandom.com/wiki/Not_Enough_Items) mod will remember that pressing <kbd>F7</kbd> would bring up an overlay for viewing the light level of any block. I now use the [Light Overlay](https://www.curseforge.com/minecraft/mc-mods/light-overlay) mod to do the same thing.
|
||||
|
||||
In terms of "nice to have" rendering features, I have [OKZoomer](https://github.com/joaoh1/OkZoomer) to give me Optifine-style camera zoom. I don't use Optifine anymore, but am a donator, so, to get my donator cape back on my client, I have the [Minecraft Capes](https://www.curseforge.com/minecraft/mc-mods/minecraftcapes-mod) mod installed. Continuing to add small features to the game from Optifine, I use [Connected Glass](https://www.curseforge.com/minecraft/mc-mods/connected-glass) to add connected textures, [Diagonal Panes](https://www.curseforge.com/minecraft/mc-mods/diagonal-panes) to render glass panes and iron bars on diagonal angles, and [Lambda Better Grass](https://www.curseforge.com/minecraft/mc-mods/lambdabettergrass) to connect grass textures together. For no particular reason, I also use [Better Dropped Items](https://www.curseforge.com/minecraft/mc-mods/better-dropped-items) to render dropped items *better*.
|
||||
|
||||
Finally, a nice rendering mod to have is [Dynamic FPS](https://github.com/juliand665/Dynamic-FPS), which essentially stops game rendering when window focus is lost. This just improves your computer's performance when running Minecraft in the background.
|
||||
|
||||
## Audio engine
|
||||
|
||||
Not many people know that mods exist to replace or improve Minecraft's audio engine. I quite enjoy using these mods, as the game becomes significantly more immersive. I use the [Dynamic Sound Filters](https://www.curseforge.com/minecraft/mc-mods/dynamic-sound-filters) mod to add reverb to caves and the nether (the nether becomes quite scary when game sounds are turned up). Along with Dynamic Sound Filters, I also use a fairly ridiculous mod called [Presence Footsteps](https://www.curseforge.com/minecraft/mc-mods/presence-footsteps). Presence Footsteps detects the block each of your feet is standing on, and plays the appropriate sound. This means that walking on the line between two different blocks will play the two blocks' step sounds alternating with each other. This mod also works with four-legged mobs like horses, and even eight-legged mobs.
|
||||
|
||||
Not mods, but audio-related resource packs: [Better Sounds](https://www.curseforge.com/minecraft/texture-packs/bettersounds) improves upon many of Minecraft's sound resources and notably "makes spiders sound creepy". Also, the [Orchestra Soundpack](https://www.curseforge.com/minecraft/texture-packs/orchestra-soundpack) replaces many of [Daniel Rosenfeld](https://en.wikipedia.org/wiki/C418)'s great game soundtracks with even better orchestral soundtracks composed by [Andreas Zoeller](https://www.youtube.com/user/andreaszoellermusic).
|
||||
|
||||
## UI tweaks
|
||||
|
||||
I have a lot of UI tweak mods installed to provide me with the "perfect" game HUD.
|
||||
|
||||
Starting with hotbar modifications, I use [AppleSkin](https://www.curseforge.com/minecraft/mc-mods/appleskin) to show the nutritional value of whatever food item I am holding, and [Giselbaer's Durability Viewer](https://www.curseforge.com/minecraft/mc-mods/giselbaers-durability-viewer) to show me the durability percentage of my armor, and handheld items.
|
||||
|
||||
In my inventory, I use [Roughly Enough Items](https://github.com/shedaniel/RoughlyEnoughItems) to provide crafting recipe lookup, a list of every item in the game, usages for items, and more. I also use [Roughly Enough Resources](https://www.curseforge.com/minecraft/mc-mods/roughly-enough-resources) as a plugin for Roughly Enough Items to provide extra information about mob loot and item / ore rarity in the world. When dealing with shulker boxes, it is annoying to constantly be placing them down to check their contents. For this, I use the [Shulker Box Tooltip](https://www.curseforge.com/minecraft/mc-mods/shulkerboxtooltip) mod to show a box's contents when I hover over it, and the [Shulker Box GUIs](https://www.curseforge.com/minecraft/texture-packs/shulker-box-guis) resource pack to color-code the shulker box GUI. Despite "requiring Optifine", this pack does not actually require Optifine to work.
|
||||
|
||||
|
||||
<div class="center" markdown="1">
|
||||
|
||||

|
||||
|
||||
</div>
|
||||
|
||||
In terms of HUD "extras", I use [Here's What You're Looking At](https://www.curseforge.com/minecraft/mc-mods/hwyla) to show basic information about the block I am looking at. This is very helpful for me, as I am still learning what all the new `1.9+` blocks are. I also extend HWYLA with [Hwyla Addon Horse Info](https://www.curseforge.com/minecraft/mc-mods/hwyla-addon-horse-info) to show me the stats of any horse I look at, and [cAn i MiNe thIS bLOCk?](https://www.curseforge.com/minecraft/mc-mods/can-i-mine-this-block) to tell me the needed tool to harvest a specific block. I also use [Game Info](https://www.curseforge.com/minecraft/mc-mods/gameinfo) to tell me the world time in the upper left of my screen.
|
||||
|
||||
In the world, I use [Orderly](https://www.curseforge.com/minecraft/mc-mods/orderly/) to show the health of mobs above their heads, and [Name Pain](https://www.curseforge.com/minecraft/mc-mods/name-pain), which gives players' names a red tint when they are low on health.
|
||||
|
||||
## Utility
|
||||
|
||||
There are a few small mods that I have installed to provide some nice-to-have information in game, like [Chat Heads](https://www.curseforge.com/minecraft/mc-mods/chat-heads), which shows a player's face beside their chat messages, [AuthMe](https://www.curseforge.com/minecraft/mc-mods/auth-me), which allows you to switch accounts without restarting the game, [Anti-Ghost](https://www.curseforge.com/minecraft/mc-mods/antighost), which fixes Minecraft's ghost block problem, [BetterF3](https://www.curseforge.com/minecraft/mc-mods/betterf3), which improves the game's <kbd>F3</kbd> screen, [Controlling](https://www.curseforge.com/minecraft/mc-mods/controlling-for-fabric), which allows you to search through the game's keybinds in an easier way, [Craft Presence](https://www.curseforge.com/minecraft/mc-mods/craftpresence), which provides highly-customizable [Discord Rich Presence](https://discord.com/rich-presence) data, [Custom Selection Box](https://www.curseforge.com/minecraft/mc-mods/custom-selection-box-port), which makes the block you are looking at more distinct, [Mod Menu](https://www.curseforge.com/minecraft/mc-mods/modmenu), which is used by many mods to provide settings screens, [Not Enough Crashes](https://www.curseforge.com/minecraft/mc-mods/not-enough-crashes), which just brings you back to the title screen if something stops working, instead of closing the game, and [Path Suggestion](https://www.curseforge.com/minecraft/mc-mods/pathsuggestion), which improves Minecraft command auto-complete.
|
||||
|
||||
## World map
|
||||
|
||||
I hate writing down coordinates of various things, so I use Xaero's [Minimap](https://www.curseforge.com/minecraft/mc-mods/xaeros-minimap) and [World Map](https://www.curseforge.com/minecraft/mc-mods/xaeros-world-map) mods. These both provide in-world waypoints, and generate a map of everywhere you travel in the world.
|
||||
|
||||
<div class="center" markdown="1">
|
||||
|
||||

|
||||
|
||||
</div>
|
||||
|
||||
## Building utilities
|
||||
|
||||
I spend a lot of time programmatically editing and copying builds around between worlds and servers. I do this to make redstone templates, generate build platforms, and create Minecraft-based voxel art over on [my Instagram](https://www.instagram.com/evanpratten/) page.
|
||||
|
||||
To do this, I use [WorldEdit](https://www.curseforge.com/minecraft/mc-mods/worldedit) for most of the heavy lifting, [Euclid](https://www.curseforge.com/minecraft/mc-mods/euclid) to show me my WorldEdit selections as I create them, and [Litematica](https://www.curseforge.com/minecraft/mc-mods/litematica/) to copy builds from servers to singleplayer worlds (since WorldEdit only works in singleplayer).
|
||||
|
||||
## World generation
|
||||
|
||||
Finally, I use [Terrestria](https://www.curseforge.com/minecraft/mc-mods/terrestria), [Traverse](https://www.curseforge.com/minecraft/mc-mods/traverse), [Cinderscapes](https://www.curseforge.com/minecraft/mc-mods/cinderscapes) and [Overworld Two](https://www.curseforge.com/minecraft/mc-mods/overworld-two/) to improve terrain generation, and "spice up" my worlds.
|
||||
|
||||
*NOTE: The first three of these mods introduce new blocks to the game, but do not cause issues in vanilla multiplayer games.*
|
87
src/collections/_posts/2020-12-31-year-wrapup.md
Normal file
@ -0,0 +1,87 @@
|
||||
---
|
||||
layout: default
|
||||
title: 2020 Wrap-Up
|
||||
description: I wrote a lot of code this year. This post looks back on it all
|
||||
date: 2020-12-31
|
||||
written: 2020-12-09
|
||||
tags:
|
||||
- project
|
||||
- frc
|
||||
extra:
|
||||
excerpt: 2020 has been my most productive year so far in terms of software development.
|
||||
This post looks back at the year
|
||||
redirect_from:
|
||||
- /post/g494l5j3/
|
||||
- /g494l5j3/
|
||||
aliases:
|
||||
- /blog/2020/12/31/year-wrapup
|
||||
- /blog/year-wrapup
|
||||
---
|
||||
|
||||
*So, what's up with 2020?* For readers who do not know me personally, here is a quick overview:
|
||||
|
||||
- I made over 6000 commits to over 300 open source projects
|
||||
- I passed both 300 and 400 GitHub repositories on my account (and am on track to pass 500 any second)
|
||||
- I led software development at [Raider Robotics](https://github.com/frc5024) for my third year
|
||||
- I published my largest open source project
|
||||
- I got to do a summer internship at Toronto-based animation studio [Industrial Brothers](https://www.industrialbrothers.com/), working on pipeline software
|
||||
- This website now gets over 300 readers per month (wow!)
|
||||
|
||||
## Robotics
|
||||
|
||||
This year, I packed a lot of robotics work into a small amount of time. Starting in the first week of January and running through the beginning of March, I worked with close to 100 other high school students at *Raider Robotics* to develop our most successful robot in recent memory: [Darth Raider](https://www.thebluealliance.com/team/5024/2020).
|
||||
|
||||
<div class="center" markdown="1">
|
||||
<iframe width="443" height="249"
|
||||
src="https://www.youtube.com/embed/iF-p-rTo8Xk" frameborder="0"
|
||||
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen>
|
||||
</iframe>
|
||||
|
||||
*The full source code and tooling for this robot is [public](https://github.com/frc5024/InfiniteRecharge)*
|
||||
|
||||
</div>
|
||||
|
||||
This robot brought us all the way to the finals of our only competition this year (before the world got shut down). It was only in the finals that we finally lost our winning streak (and strong #1 place) due to some questionable scoring and a broken component on one of our teammate's robots.
|
||||
|
||||
On the software side of this machine, [I pushed to switch the core development language over to Java](@/blog/2019-06-24-LanguageHunt2.md), which went very well, and the team seems to be on track to stay with this new language and toolset for the foreseeable future. This year, we pushed very hard towards our goal of letting software handle as much of the "hard work" of operation as possible. In previous years, our robots mainly acted as stupidly expensive RC cars with custom controls, but this year, we wanted to offload tasks prone to human error to computers.
|
||||
|
||||
We were able to design a fully autonomous shooting system using high-speed computer vision, real-time path planning, and ball trajectory models to allow our operators to make the robot score game pieces by pressing and holding a single button. On top of this scoring system, *Darth Raider* featured fully autonomous, real-time-error-correcting spatial navigation, allowing us to input a list of goal coordinates for the robot to navigate to efficiently. The final large autonomously controlled system of this robot was known as the "hopper": a long tunnel for storing and stacking balls. This system was 100% software controlled, and made use of an amazing predictive sorting system developed by @rsninja722 that would perfectly align balls as they were fed into the robot. Below is a clip taken from the semi-finals, where we wrote an experimental system that allowed us to essentially use two completely separate robots as one, effectively doubling our game piece storage capacity from our maximum of 4 balls to 7. (Big thanks to the [Falcons](https://www.thebluealliance.com/team/5032) for letting us subject them to this experiment.)
|
||||
|
||||

|
||||
|
||||
For a few months after we finished competing, I went on to publish my largest open source project to date: [Lib5K](https://github.com/frc5024/Lib5k).
|
||||
|
||||
> Lib5K is the software library that powers the Raider Robotics control system. It originally started as a summer project by @ewpratten back in the 2018 offseason. [...] Lib5K development really picked up during summer 2019, where the library (and all of Raider Robotics development) switched from C++ to Java Native. This switch also brought a lot of the core features to Lib5K, and the whole team got involved in development during the 2020 season. \[source: [Lib5K Wiki](https://cs.5024.ca/lib5k/)]
|
||||
|
||||
My goal with Lib5K was to design a way for myself to pass along my knowledge and learnings to future team members in an easy-to-digest way. According to internal team productivity metrics, I have made around 650,000 edits to this library, making it my most contributed-to project ever.
|
||||
|
||||
## Personal projects
|
||||
|
||||
During a rewrite of this website I did earlier this year, I implemented a new section on the homepage, where I list all of my major projects. This list is ever-growing, and generally a good place to see what I am working on.
|
||||
|
||||
<!-- This year, I have spent my time in the following development categories:
|
||||
|
||||
- Libraries
|
||||
- CLI
|
||||
- Web
|
||||
- Pipeline -->
|
||||
|
||||
All the code I have written this year has led to the need to build a plethora of common software libraries in my three main languages: Python, Java, and C/C++. Through the process of building these, I have picked up many new skills, like properly unit-testing software, [building reliable library distribution systems](@/blog/2020-09-17-Ultralight-writeup.md), and extensively documenting code.
|
||||
|
||||
In the web world, I have learned to work with [JamStack](https://jamstack.org/), and have deployed many serverless / lambda-powered web applications, mostly based on [Flask](https://github.com/pallets/flask) or [Jekyll](https://jekyllrb.com/). A list of my repositories that use these technologies can be found [here](https://github.com/search?l=&q=user%3AEwpratten+filename%3Anow.json&type=code).
|
||||
|
||||
I have also picked up low-level programming for systems running on the [AVR Microprocessor architecture](https://en.wikipedia.org/wiki/AVR_microcontrollers). I have found AVR programming to be a fun and generally easy way to learn about very low-level computing: interrupts, timers, I/O, serial busses, memory management, etc. I also used this as an opportunity to learn how to use a powerful new build system developed by Google, called [*Bazel*](/categories?c=bazel). Many of my projects this year have been shifting over to build with *Bazel* as I really enjoy the build environment and tooling available. I have also used *Bazel* to build [my popular school note-taking system](@/blog/2020-08-23-Notetaking-with-LaTeX.md).
|
||||
|
||||
A list of the over 200 personal projects I have worked on this year (including unfinished projects) can be found with [this query](https://github.com/search?l=&q=user%3AEwpratten+created%3A%22%3E+2020-01-01+%3C+2021-01-01%22&type=repositories).
|
||||
|
||||
## Finishing up
|
||||
|
||||
I'll end this post with a few things that did not get to be their own major section:
|
||||
|
||||
### My programming challenge
|
||||
|
||||
People who know me in real life know of a bit of a challenge I set for myself a while ago (although I don't actually try very hard to keep up). I have now gone a year without a break from programming any longer than three days (completely accidental), and two years without a break any longer than five days. (Yes, this is the secret to how I have so many projects: I never stop writing code.)
|
||||
|
||||
### This website
|
||||
|
||||
I have now experimented with three posting schedules for this website: monthly, bi-weekly, and weekly. Monthly posts were too spread-apart, and left this site feeling a little empty. I switched to weekly posting through the summer, which worked out great. Since school started again, I have moved to bi-weekly posts, writing each post a few weeks before publishing it (hover over the date of any post to see the date I wrote it). The bi-weekly system seems to be working very well, and I will likely stick to it until summer 2021, so enjoy more content fairly regularly (and remember to subscribe to my [RSS Feed](/rss.xml)).
|
80
src/collections/_posts/2020-12-4-galliumos.md
Normal file
@ -0,0 +1,80 @@
|
||||
---
|
||||
layout: default
|
||||
title: Upgrading my chromebook
|
||||
description: The process of installing GalliumOS on an ACER R11
|
||||
date: 2020-12-04
|
||||
written: 2020-10-31
|
||||
tags: project laptop hardware
|
||||
extra:
|
||||
excerpt: Performing some upgrades to my old laptop. This post outlines the setup
|
||||
process for installing GalliumOS
|
||||
redirect_from:
|
||||
- /post/gk3jEkd4/
|
||||
- /gk3jEkd4/
|
||||
aliases:
|
||||
- /blog/2020/12/04/galliumos
|
||||
- /blog/galliumos
|
||||
---
|
||||
|
||||
My previous development laptop was an [Acer R11](https://www.acer.com/ac/en/CA/content/series/acerchromebookr11) chromebook. I always ran it in [developer mode](https://chromium.googlesource.com/chromiumos/docs/+/master/developer_mode.md) with all the Linux packages I needed installed via [chromebrew](https://github.com/skycocker/chromebrew). This setup worked great except for GUI programs, as (at the time), the built-in [Wayland](https://en.wikipedia.org/wiki/Wayland_(display_server_protocol)) server on the chromebook was not exposed to the user in a meaningful way. I relied on an internal tool from Google called [sommelier](https://chromium.googlesource.com/chromiumos/platform2/+/HEAD/vm_tools/sommelier/) to translate X11 calls to the internal Wayland server. None of this was ideal, but with a lot of scripts and aliases, I made it work.
|
||||
|
||||
Recently, I decided to remove the locked-down ChromeOS altogether, and set the laptop up with [GalliumOS](https://galliumos.org) so it can be used as a lightweight code-review machine with access to some useful tools like [VSCode](https://code.visualstudio.com/) and [GitKraken](https://www.gitkraken.com/). This whole process is actually fairly easy, and a good way to breathe new life into an old chromebook. This guide will be R11-specific, but the process doesn't vary too wildly between models.
|
||||
|
||||
## Developer mode
|
||||
|
||||
A standard feature on chromebooks is "developer mode". This is a hidden boot mode that is designed to give [ChromiumOS](https://www.chromium.org/chromium-os) contributors and Google developers access to debug tools when testing new OS builds. Along with debug tools, this mode also exposes a Linux terminal with root access to the user via <kbd>Ctrl</kbd> + <kbd>Alt</kbd> + <kbd>-></kbd>. On an extremely locked down system like a chromebook, this terminal access exposes a lot of new capability. For this use case, we will only use it to modify the system bootloader.
|
||||
|
||||
To enable developer mode, simply press <kbd>Esc</kbd> + <kbd>Refresh</kbd> + <kbd>Power</kbd>, and let the chromebook reboot. Once the recovery screen pops up, press <kbd>Ctrl</kbd> + <kbd>D</kbd>, and the device is now in developer mode.
|
||||
|
||||
## Write protection
|
||||
|
||||
This step will void your device's warranty. Chromebooks are able to handle almost anything you throw at them: even if you were to delete important system files to the point where the device can no longer boot, hopping into recovery mode can reset the device to a working state. This works thanks to ChromeOS's write protect mechanism; all important files are protected by hardware-enforced write protection. Since the process of loading a new operating system onto the device involves overwriting important system files (like the BIOS), we must physically disable write protection.
|
||||
|
||||
Luckily, on the Acer R11, this process is very simple. Firstly, unscrew the laptop's bottom plate to expose the motherboard (some screws are hidden under rubber feet). With the backplate off, you will find a screw that looks like this:
|
||||
|
||||

|
||||
|
||||
The screw is hard to miss: it is beside the WiFi card and has an arrow pointing to it. Simply remove it, and put the laptop back together. You now have a fully unlocked device.
|
||||
|
||||
## Flashing a custom bootloader
|
||||
|
||||
[Mr Chromebox](https://mrchromebox.tech), a well-known person in the world of Chromebook modification, provides and maintains a very easy-to-use shell script that handles bootloader modifications automatically. To use this tool, open up the ChromeOS terminal (<kbd>Ctrl</kbd> + <kbd>Alt</kbd> + <kbd>-></kbd>), log in with the username `chronos` (you must already be logged in to your personal Google account; this will not work from the login screen), and run:
|
||||
|
||||
```sh
|
||||
crossystem dev_boot_usb=1 dev_boot_legacy=1
|
||||
cd; curl -LO mrchromebox.tech/firmware-util.sh
|
||||
sudo install -Dt /usr/local/bin -m 755 firmware-util.sh
|
||||
sudo firmware-util.sh
|
||||
```
|
||||
|
||||
This will open up the `firmware-util` settings screen.
|
||||
|
||||

|
||||
|
||||
You will want to select the `RW_LEGACY` option to load the `RW_LEGACY` / SeaBIOS payload. The `UEFI` option is technically the better choice, but it will completely remove the device's ability to run ChromeOS again in the future.
|
||||
|
||||
### Setting fuses
|
||||
|
||||
The `RW_LEGACY` payload only works if the laptop always has power. Once the device completely runs out of power, the boot settings are wiped from the device (not something we want). The solution is to modify the [system `gbb` fuses](https://chromium.googlesource.com/chromiumos/platform/vboot/+/master/_vboot_reference/firmware/include/gbb_header.h). This sounds complicated (and it is), but Mr Chromebox comes to the rescue again with the `GBB Flags` option in his script. *After* the `RW_LEGACY` payload has been configured, run his script again, and select `GBB Flags`.
|
||||
|
||||
## Installing GalliumOS
|
||||
|
||||
On another computer, [download GalliumOS](https://galliumos.org/download) (make sure to select the `Braswell` option), and [create a bootable USB](https://wiki.galliumos.org/Installing/Creating_Bootable_USB). Plug this USB into the Chromebook, reboot, and press <kbd>Ctrl</kbd> + <kbd>L</kbd> as the warning screen pops up. This will begin the GalliumOS setup process (which is identical to that of Ubuntu).
|
||||
|
||||
### Enabling verbose boot
|
||||
|
||||
It is nice to know what is happening when the device is booting. To disable the boot animation and replace it with the boot log, edit `/etc/default/grub`, and replace both the `quiet` and `splash` arguments with `noplymouth` in the `GRUB_CMDLINE_LINUX_DEFAULT` options. Next, run the following, then reboot:
|
||||
|
||||
```sh
|
||||
sudo update-grub
|
||||
```
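For reference, the relevant line in `/etc/default/grub` would end up looking something like this (the exact set of arguments may differ on your install):

```sh
# /etc/default/grub -- before
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

# /etc/default/grub -- after
GRUB_CMDLINE_LINUX_DEFAULT="noplymouth"
```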
|
||||
|
||||
<!--
|
||||
https://imgur.com/a/GuyYz
|
||||
|
||||
https://medium.com/@simstems/how-i-got-the-acer-chromebook-r11-cb5-132t-to-run-parrot-security-os-without-crouton-d282a110060a
|
||||
|
||||
https://wiki.galliumos.org/Hardware_Compatibility
|
||||
|
||||
https://chromium.googlesource.com/chromiumos/platform/vboot/+/master/_vboot_reference/firmware/include/gbb_header.h
|
||||
-->
|
36
src/collections/_posts/2021-01-16-printer-tunneling.md
Normal file
@ -0,0 +1,36 @@
|
||||
---
|
||||
layout: default
|
||||
title: Tunneling a printer from a home network to a VPN
|
||||
description: Using socat to port-forward between network interfaces
|
||||
date: 2021-01-16
|
||||
written: 2020-12-19
|
||||
tags:
|
||||
- project
|
||||
- tutorial
|
||||
extra:
|
||||
excerpt: I use a self-hosted VPN to access all my devices at all times, and to deal
|
||||
with my school's aggressive firewall. This post explains the process I use for
|
||||
exposing my home printer to the VPN.
|
||||
redirect_from:
|
||||
- /post/g494ld99/
|
||||
- /g494ld99/
|
||||
aliases:
|
||||
- /blog/2021/01/16/printer-tunneling
|
||||
- /blog/printer-tunneling
|
||||
---
|
||||
|
||||
For the past few years, I have been using a self-hosted VPN to bring all my personal devices into the same "network" even though many of them are spread across various locations and physical networks. This system never gives me problems, but there was one thing I wished I could do: access non-VPN devices on other networks using one of my VPN devices as a gateway.
|
||||
|
||||
Of course, I could actually grab a Raspberry Pi and turn it into a real network gateway for the VPN, allowing me to access anything I want as long as it was attached to that Pi's network interface. This setup was not entirely practical though, as I wanted the ability to pull multiple devices from multiple networks into my VPN.
|
||||
|
||||
Doing a quick search for solutions around the internet led me to a bunch of long and visually complex [`iptables`](https://linux.die.net/man/8/iptables) commands I could run, but I wanted something much simpler. Further searching led me to [`socat`](https://linux.die.net/man/1/socat).
|
||||
|
||||
> **Socat** is a command line based utility that establishes two bidirectional byte streams and transfers data between them. Because the streams can be constructed from a large set of different types of data sinks and sources (see address types), and because lots of address options may be applied to the streams, socat can be used for many different purposes. [manpages socat(1)]
|
||||
|
||||
As stated in the Linux manpages, socat is essentially a port-forwarding utility. Using it, I am able to expose my local printer to my VPN through a Raspberry Pi with this short command:
|
||||
|
||||
```sh
|
||||
socat tcp-listen:9100,reuseaddr,fork tcp:<printer_ip>:9100
|
||||
```
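Assuming another device on the VPN has `nc` (netcat) installed, a quick way to confirm that the tunnel is up and listening is to probe the forwarded port:

```sh
# Check that port 9100 on the Raspberry Pi's VPN address is reachable
nc -zv <raspberrypi_vpn_ip> 9100
```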
|
||||
|
||||
I have also published a small tool called [`localexpose`](https://github.com/Ewpratten/localexpose) that does the same thing with a bit of a nicer argument syntax.
|
195
src/collections/_posts/2021-02-25-kbfs-maven.md
Normal file
@ -0,0 +1,195 @@
|
||||
---
|
||||
layout: default
|
||||
title: Using KBFS as a makeshift maven server
|
||||
description: A free and secure way to host personal Java libraries and applications
|
||||
date: 2021-02-25
|
||||
written: 2021-02-22
|
||||
tags:
|
||||
- maven
|
||||
- project
|
||||
- java
|
||||
extra:
|
||||
excerpt: In my never-ending hunt for a suitable solution for hosting Java libraries,
|
||||
I take a stop to try out Keybase Filesystem (KBFS)
|
||||
redirect_from:
|
||||
- /post/g4lk45j3/
|
||||
- /g4lk45j3/
|
||||
aliases:
|
||||
- /blog/2021/02/25/kbfs-maven
|
||||
- /blog/kbfs-maven
|
||||
---
|
||||
|
||||
As I continue to write more and more Java libraries for personal and public use, I keep finding myself limited by my library hosting solutions. Maven servers are currently my go-to way of storing and organizing all things Java. I have gone through a solid handful of servers over the past few years; here are my comments on each:
|
||||
|
||||
- GitHub Releases
|
||||
- No [dependabot](https://dependabot.com/) integration
|
||||
- No easy way to get Gradle to load files directly from GitHub
|
||||
- [JitPack](https://jitpack.io/)
|
||||
- Slow builds
|
||||
- No easy way to publish custom artifacts or use custom groups
|
||||
- Sometimes unusably long cache policy
|
||||
- [Ultralight](@/blog/2020-09-17-Ultralight-writeup.md)
|
||||
- Has a file transfer limit
|
||||
- Uses my personal API keys to interact with GitHub
|
||||
- No way to automate package updates
|
||||
- [GitHub Packages](https://github.com/features/packages)
|
||||
- Requires users to authenticate even for public assets
|
||||
- Has a file transfer limit
|
||||
- Uses a separate maven url per project
|
||||
|
||||
As a student, I prefer not to do the sensible solution--*spin up an [Artifactory](https://jfrog.com/artifactory/) server*--as that costs money I could be spending on coffee.
|
||||
|
||||
## What makes a maven server special?
|
||||
|
||||
Really, not much. As outlined in my [previous maven-related post](@/blog/2020-09-17-Ultralight-writeup.md), a maven server is just a simple webserver with a specific directory structure, and some metadata files placed in specific locations.
|
||||
|
||||
Let's say we wanted to publish a package with the following attributes:
|
||||
|
||||
| Attribute | Value |
|
||||
| ---------- | ---------------------- |
|
||||
| GroupID | `ca.retrylife.example` |
|
||||
| ArtifactID | `example-artifact` |
|
||||
| Version | `1.0.4` |
|
||||
|
||||
The resulting directory structure would end up looking like:
|
||||
|
||||
```
|
||||
.
|
||||
└── ca
|
||||
└── retrylife
|
||||
└── example
|
||||
└── example-artifact
|
||||
├── maven-metadata.xml
|
||||
├── maven-metadata.xml.sha1
|
||||
└── 1.0.4
|
||||
├── example-artifact-1.0.4.jar
|
||||
├── example-artifact-1.0.4.jar.sha1
|
||||
├── example-artifact-1.0.4.pom
|
||||
└── example-artifact-1.0.4.pom.sha1
|
||||
```
|
||||
|
||||
<div class="center" markdown="1">
|
||||
|
||||
*Generated with [tree.nathanfriend.io](https://tree.nathanfriend.io)*
|
||||
|
||||
</div>
|
||||
|
||||
In this example, I chose to use the `sha1` hashing algorithm, but maven clients support pretty much any algorithm I can think of.
|
||||
|
||||
As you can see, the files are layed out very logically. Packages are organized similarly to how you organize your source code; each artifact is accompanied by a [Project Object Model](https://maven.apache.org/guides/introduction/introduction-to-the-pom.html) describing it, `maven-metadata` files keep track of versioning, and every file also has a hash alongside it.

For reference, the `maven-metadata.xml` in this example would look something like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<metadata>
  <groupId>ca.retrylife.example</groupId>
  <artifactId>example-artifact</artifactId>
  <versioning>
    <release>1.0.4</release>
    <latest>1.0.4</latest>
    <versions>
      <version>1.0.4</version>
    </versions>
    <lastUpdated>20210216203206</lastUpdated>
  </versioning>
</metadata>
```

As far as I know, `maven-metadata` files are not actually required, but I always include them so that I can make use of [dynamic versions](https://docs.gradle.org/current/userguide/dynamic_versions.html) in Gradle.
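
For example, with the metadata above in place, a consuming project can ask Gradle for a dynamic version range instead of pinning `1.0.4` exactly (a minimal sketch; the group and artifact names are the ones from the example above):

```groovy
dependencies {
    // Resolves to the newest 1.0.x release listed in maven-metadata.xml
    implementation "ca.retrylife.example:example-artifact:1.0.+"
}
```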

## Using a static CDN as a maven server

Since there is nothing special about a maven server aside from its directory structure, anywhere that can host files can become a server. My choice for now is [Keybase](https://keybase.io/)'s [KBFS](https://book.keybase.io/docs/files). KBFS is a PGP-signed file store that allows every user 250GB of free storage. This web filesystem is mounted to the user's device using [FUSE](https://www.kernel.org/doc/html/latest/filesystems/fuse.html) in a similar way to [rclone](https://rclone.org/).

This local mount & sync setup allows me to interact with my `/keybase` mountpoint like any other directory, while having all its contents automatically backed up and published.

### Taking advantage of this

Gradle's [`maven-publish`](https://docs.gradle.org/current/userguide/publishing_maven.html) plugin is designed to publish packages to remote servers, but will also work with local URIs. Simply pointing a [`MavenPublication`](https://docs.gradle.org/current/dsl/org.gradle.api.publish.maven.MavenPublication.html) to `/keybase/public/ewpratten/maven/release` (my directory of choice for now) will automatically generate everything mentioned in the section about file structure above.

My exact configuration for doing this in Gradle is as follows ([source](https://github.com/Ewpratten/gradle_scripts/blob/master/keybase_publishing.gradle)):

```groovy
apply plugin: "maven-publish"

// Determine SNAPSHOT vs release
def isRelease = !project.findProperty("version").contains("-SNAPSHOT")
if (!isRelease) {
    println "Detected SNAPSHOT"
}

publishing {
    repositories {
        maven {
            name = "KBFS"
            if (isRelease) {
                url = uri(
                    project.findProperty("kbfs.maven.release") ?: "/keybase/public/ewpratten/maven/release"
                )
            } else {
                url = uri(
                    project.findProperty("kbfs.maven.snapshot") ?: "/keybase/public/ewpratten/maven/snapshot"
                )
            }
        }
    }
}
```

This configuration is a bit fancy as it will separate snapshots from releases, and allow me to completely override the endpoint(s) in my `settings.gradle` file if I choose. A minimal approach would be:

```groovy
apply plugin: "maven-publish"

publishing {
    repositories {
        maven {
            name = "KBFS"
            url = uri("/keybase/public/<your username>/maven")
        }
    }
}
```
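
With either of these blocks (plus a publication such as a `mavenJava` publication defined, which is not shown here), pushing a build into KBFS is just a matter of running the standard `maven-publish` lifecycle task:

```sh
# Writes the artifacts, POMs, hashes, and maven-metadata into the KBFS mountpoint
./gradlew publish
```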

### Pretty URLs

With the solution outlined in this post, the end user would end up specifying one of the following URLs in their maven client:

- `https://<username>.keybase.pub/maven/release/`
- `https://<username>.keybase.pub/maven/snapshot/`

While that is perfectly fine, I prefer to keep all of my projects / services / etc under my personal domain (`retrylife.ca`). Unlike the rest of this post, this step does cost some money.

I already rent two servers for various other projects, and one of them is running the [Caddy](https://caddyserver.com/) webserver and acting as a reverse proxy. I have pointed two domains (`release.maven.retrylife.ca` and `snapshot.maven.retrylife.ca`) at this server and am using the following rules to route them:

```text
release.maven.retrylife.ca {
    route /* {
        rewrite * /maven/release/{path}
        reverse_proxy https://ewpratten.keybase.pub {
            header_up Host ewpratten.keybase.pub
        }
    }
}

snapshot.maven.retrylife.ca {
    route /* {
        rewrite * /maven/snapshot/{path}
        reverse_proxy https://ewpratten.keybase.pub {
            header_up Host ewpratten.keybase.pub
        }
    }
}
```

This means that I can point users at one of the following domains, and they will get the packages they are looking for:

- `https://release.maven.retrylife.ca/`
- `https://snapshot.maven.retrylife.ca/`

I am also now able to switch out backend servers / services whenever I want, and users will see no difference.
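
On the consumer side, pulling from the proxied repository is nothing special either. A sketch of what a user's `build.gradle` might contain (the dependency coordinates are just the illustrative ones from earlier in this post):

```groovy
repositories {
    mavenCentral()
    // The reverse-proxied KBFS repository from above
    maven {
        url = uri("https://release.maven.retrylife.ca/")
    }
}

dependencies {
    implementation "ca.retrylife.example:example-artifact:1.0.4"
}
```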

## Future improvements

Some time in the future, I plan to move from KBFS to the S3-based [DigitalOcean Spaces](https://www.digitalocean.com/products/spaces/) so I can speed up the download time for packages, and have better global distribution of files.
212
src/collections/_posts/2021-03-14-qmk-vortex-core.md
Normal file
@ -0,0 +1,212 @@
---
layout: default
title: How I flashed QMK to my Vortex Core
description: Open-source firmware on a closed-source keyboard
date: 2021-03-14
written: 2021-03-14
tags:
- project
- keyboards
- firmware
- walkthrough
extra:
  excerpt: After having some issues with the factory firmware on my 40% keyboard,
    I decided to replace it with the widely used QMK firmware instead.
redirect_from:
- /post/gkedkd93/
- /gkedkd93/
aliases:
- /blog/2021/03/14/qmk-vortex-core
- /blog/qmk-vortex-core
---

Last fall, I [purchased my first mechanical keyboard](@/blog/2020-11-06-Vortex-Core.md), the [Vortex Core](https://mechanicalkeyboards.com/shop/index.php?l=product_detail&p=3550), and have been loving it ever since. Well, almost loving it. There are a few "quirks" of the keyboard that I wasn't super fond of, like occasionally not sending `KEY_UP` commands back to the computer, or the badly documented and maintained system for building custom layouts.

In my previous post on this keyboard, I had mentioned @ChaoticEnigma's fork of [QMK](https://github.com/qmk/qmk_firmware) for the core. This custom firmware had been sitting on my mind for a while, and I finally decided to try it out on my keyboard. This post will cover the process of loading QMK onto a non-supported Vortex Core keyboard.

The following are all the steps required to complete this process. Make sure to **read them all before getting started**. As per usual when I am outlining ways to modify hardware, you might brick your keyboard doing this, so *be careful*.

<!-- no toc -->
- [Compiling the toolchain](#compiling-the-toolchain)
  - [OpenOCD](#openocd)
  - [Pok3rtool](#pok3rtool)
  - [Intermediary firmware](#intermediary-firmware)
- [Compiling the firmware](#compiling-the-firmware)
- [Finding debugging hardware](#finding-debugging-hardware)
- [Connecting to the core's JTAG interface](#connecting-to-the-core-s-jtag-interface)
- [Unlocking the keyboard](#unlocking-the-keyboard)
- [Flashing QMK](#flashing-qmk)

## Compiling the toolchain

Firstly, you'll need all the software tools required to interface with the keyboard. The following list contains GitHub links to everything needed (this is all for Linux of course):

- [OpenOCD patched with HT32 support](https://github.com/ChaoticConundrum/openocd-ht32)
- [Commandline interface tool](https://github.com/pok3r-custom/pok3rtool)
- [Unlocked core firmware](https://github.com/pok3r-custom/pok3r_re_firmware)

### OpenOCD

What is [OpenOCD](http://openocd.org/)?

> OpenOCD is a free-software tool mainly used for on-chip debugging, in-system programming and boundary-scan testing. OpenOCD supports flashing and debugging a wide variety of platforms such as: ARMv5 through latest ARMv8. [source: [Debian WIKI](https://wiki.debian.org/OpenOCD)]

OpenOCD is a standard tool / interface that allows you to use many different types of hardware debuggers interchangeably. It is a very useful project for tasks like this, where we will need to connect directly into an embedded chip via its debugging ports.

The link provided above for OpenOCD is actually a fork of the main project that specifically adds support for the [Holtek HT32F165x](https://www.keil.com/dd2/holtek/ht32f1655/) MCU (the chip that powers the keyboard).

After cloning the GitHub repo, the build process is fairly simple:

```sh
cd openocd-ht32

# Install the build dependencies
sudo apt install build-essential pkg-config libtool

# Configure the build system
./bootstrap

# These arguments may change depending on the hardware debugger you are using. I use the ST-LinkV2
# Check the official OpenOCD documentation for more information on this
./configure --enable-stlink --disable-werror

# Build
make
sudo make install
```

### Pok3rtool

`pok3rtool` is the commandline interface tool designed specifically for interacting with the firmware on Vortex keyboards. It is fairly unstable, but I can confirm that all the commands used in this guide work just fine.

The build process for `pok3rtool` is a little different as it requires you to clone the GitHub repo, then create a new directory called `pok3rtool-build` **beside** the cloned repo. Aside from that quirk, the process for building `pok3rtool` is as follows:

```sh
cd pok3rtool

# Pull the git submodules
git submodule update --init --recursive

# Create the build directory beside the repo, then move into it
mkdir -p ../pok3rtool-build
cd ../pok3rtool-build

# Build the tool
cmake ../pok3rtool
make
```

### Intermediary firmware

Part of the firmware upgrade process involves loading some intermediary firmware onto the keyboard. This is simply the stock Vortex Core firmware, but with the security bit disabled on the chip. This allows us to perform all further firmware upgrades over the keyboard's USB port instead of through JTAG.

Building this is as simple as cloning the repo, then running `make`.
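
For completeness, a sketch of those two steps (the exact directory to run `make` from may differ depending on the repo layout; the resulting `firmware_builtin_core.bin` is the file referenced in the flashing step later on):

```sh
# Grab the unlocked firmware sources and build them
git clone https://github.com/pok3r-custom/pok3r_re_firmware.git
cd pok3r_re_firmware
make
```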

## Compiling the firmware

When it comes to the final firmware, I have [my own fork](https://github.com/Ewpratten/qmk_core/) of @ChaoticEnigma's fork of QMK. You could use @ChaoticEnigma's fork, but I would recommend my own, since I am specifically maintaining it for the Vortex Core, and my fork has a few more features (like a proper layout for the core).

You have a choice of four QMK keyboard layouts for the Vortex Core:

- @ChaoticEnigma's `default`
  - A remake of the factory layout (missing a few function keys)
- @ChaoticEnigma's `chaotic`
  - Presumably @ChaoticEnigma's personal layout. No idea what it does
- `ewpratten`
  - My personal layout
- `better_default`
  - My full remake of the factory layout

To build my QMK fork, run the following:

```sh
cd qmk_core

# Fetch everything needed to build QMK
make git-submodule

# Build the layout of your choosing
# For example, I use: make vortex/core:ewpratten
make vortex/core:<layout_name>
```

## Finding debugging hardware

As mentioned in the [OpenOCD](#openocd) section, I am using a clone of the [ST-Link/V2](https://www.st.com/en/development-tools/st-link-v2.html), which I picked up for a few dollars [from ebay](https://www.ebay.com/itm/ST-Link-V2-OpenOCD-On-Chip-Debugger-STM8-STM32-JTAG-SWIM-Linux-OSX-Arduino/254315946241?hash=item3b36696101:g:Uq0AAOSwYRJdQWxj). You can use any OpenOCD-supported debugger though.

The next section will assume you have an ST-Link when I talk about I/O pin names.

## Connecting to the core's JTAG interface

It's finally time to open up the keyboard. This is pretty simple: there are five screws hidden under the keycaps. Just remove the caps, and the screws.

On the bottom of the keyboard, you'll see a serial number. If this is **not** `CYKB175_V03 20160511`, stop right now, and do not proceed with this guide. This is the only model supported.

In between the `LED80` and `LED66` markings on the PCB (just below the serial number), you'll find an empty 5-pin header. For the sake of simplicity, I'll number them 1 to 5, where 1 is the pin closest to the serial number. Connect them to your hardware debugger as follows (this will require soldering, or small clips):

| Keyboard Pin | Debugger Pin |
| ------------ | ------------ |
| `1`          | **N/A**      |
| `2`          | `SWDIO`      |
| `3`          | `SWCLK`      |
| `4`          | `RST`        |
| `5`          | `GND`        |

## Unlocking the keyboard

With the keyboard JTAG interface wired up, plug in the keyboard's USB (to provide power), then after the keyboard has connected, plug in the hardware debugger.

Move to the directory you built OpenOCD in, then run the following:

```sh
cd tcl

# Connect to the keyboard
# The first -f flag of this command will vary depending on the hardware debugger you chose to use
../src/openocd -c 'set HT32_SRAM_SIZE 0x4000' -c 'set HT32_FLASH_SIZE 0x10000' -f ./interface/stlink-v2-1.cfg -f ./target/ht32f165x.cfg
```

This will spawn a telnet server on `localhost:4444`. Connect to that with `telnet 127.0.0.1 4444`, then run the following commands over telnet:

```sh
# REMINDER: We are now going to be modifying firmware on the keyboard. If you mess up, you may have just created an expensive brick

# We need to erase the existing firmware
ht32f165x mass_erase 0

# Now, we write the new firmware
# The '0' at the end of this command is very important
# This must be an absolute path to wherever you cloned the intermediary firmware
flash write_image /path/to/pok3r_re_firmware/disassemble/core/builtin_core/firmware_builtin_core.bin 0
```

The last command will take a solid five minutes, so go grab a snack and *don't bump anything*. There is no progress bar or anything, so enjoy the suspense.

Assuming all went well, you can run `exit` over telnet, then close OpenOCD. Unplug the hardware debugger and keyboard, then just plug the keyboard back in. It should function like it just came from the factory. If you forgot to unplug the debugger, the keyboard will not function.

You now have a firmware-unlocked Vortex Core.

## Flashing QMK

*It's time for the last step 🎉*

With the keyboard unlocked, you can technically load anything you want onto it, but let's stick with QMK. The following commands will do all the hard work for you:

```sh
# Go back to where you built pok3rtool
cd pok3rtool-build

# Make sure you can see the keyboard from pok3rtool
sudo ./pok3rtool list

# Flash the firmware
# "QMK_CORE_EW" can be whatever you want the version of your keyboard to display
# vortex_core_xxxx.bin will be different depending on the keyboard layout you chose to compile
sudo ./pok3rtool -t vortex-core flash QMK_CORE_EW /path/to/qmk_core/vortex_core_xxxx.bin
```

Any time you want to update QMK or change layouts, the above commands will be how to do it.

**NOTE:** Sometimes, `pok3rtool` fails to put the keyboard in bootloader mode before flashing new firmware. If this happens, just run the following command before flashing (you may need to repeat this process a few times to get `pok3rtool` to behave):

```sh
sudo ./pok3rtool -t vortex-core bootloader
```
97
src/collections/_posts/2021-04-20-direwolf-aprs.md
Normal file
@ -0,0 +1,97 @@
---
layout: default
title: Building a cheap APRS digipeater
description: How I set up my feature-packed APRS digipeater for under $100
date: 2021-04-20
written: 2021-04-20
tags:
- project
- raspberrypi
- aprs
- walkthrough
- radio
extra:
  excerpt: Using an extra radio and some spare parts, I set up an APRS/APRS-IS/APRStt
    digipeater. This post covers some of the details.
redirect_from:
- /post/eb0klDd9/
- /eb0klDd9/
aliases:
- /blog/2021/04/20/direwolf-aprs
- /blog/direwolf-aprs
---

***WARNING:** To replicate this project, you **must** be the holder of an amateur radio license in your country*

I have an extra [Baofeng UV-5R](https://baofengtech.com/product/uv-5r/) lying around, and had no idea what to use it for. The original plan was to set up a UHF simplex repeater with internet linking capabilities, but that project was set back due to my lack of time to figure out how to set up the [Asterisk PBX](https://en.wikipedia.org/wiki/Asterisk_(PBX)).

After giving up on Asterisk, I was left without ideas once again. That is, until a few days ago when I remembered that the large [APRS](http://www.aprs.org/) network exists, and is fairly easy to experiment with. I have some past experience with APRS, specifically the [APRS-IS](http://www.aprs-is.net/) internet bridge. I have cron jobs running on a few of my computers that fetch their positions through [geo-ip](https://en.wikipedia.org/wiki/Internet_geolocation) and beacon this info (plus weather info if I feel like it) to the APRS network through APRS-IS. None of that setup has anything to do with radio though, so it feels like I'm not a *true APRS user*.

A solution to both problems: set up a digipeater.

## What my radio is doing

To be specific, I am running much more than just a digipeater. This spare radio is also an APRS-IS [IGate](http://www.aprs-is.net/IGating.aspx), and an [APRStt](http://www.aprs.org/aprstt.html) bridge. The most important of these capabilities is APRStt.

APRStt is a standard originally designed for the [PSAT2](http://www.aprs.org/psat2.html) satellite that allows radio operators with non-APRS-compatible radios to send beacons using DTMF sequences. The encoding standard for doing this is not exactly user friendly in my opinion, but it works.

Combining these radio capabilities with some basic knowledge of the [Maidenhead Locator System](https://en.wikipedia.org/wiki/Maidenhead_Locator_System) on my part allows me to go anywhere in the city with my HT and send beacons to the APRS network using DTMF. Pretty cool in my opinion.

## Setup Guide

The following is a mostly complete guide on replicating my digipeater setup. You will have to do some extra reading to understand the configuration system.

### Required hardware

To set up a digipeater, you need a controller, a radio, and some hardware to connect the two. All of the parts I use are found below (I did not choose the most cost-effective listings here):

- [Raspberry Pi 3B+](https://www.ebay.com/itm/193345669838)
- [Baofeng UV-5R](https://baofengtech.com/product/uv-5r/)
- [USB sound card](https://www.ebay.com/itm/203355827559)
- 2x [3.5mm audio cables](https://www.ebay.com/itm/402032141776)
- [2.5mm Male to 3.5mm Female adaptor](https://www.ebay.com/itm/202853095248)
- [Single-channel relay](https://www.ebay.com/itm/114771147582)
- Some female-to-female jumper cables (see [here](https://www.ebay.com/itm/203350136236))
- Solder and a soldering iron are also needed for cable modifications

### Compiling Dire Wolf

Compiling and setting up the control software, [Dire Wolf](https://github.com/wb2osz/direwolf), is pretty easy. The full guide on this process can be found [here](https://github.com/wb2osz/direwolf/blob/master/doc/Raspberry-Pi-APRS.pdf). I'll summarize below:

On a fresh install of [Raspbian](https://www.raspberrypi.org/software/operating-systems/#raspberry-pi-os-32-bit):

```sh
sudo apt update
sudo apt install cmake libasound2-dev libudev-dev git
cd ~
git clone https://github.com/wb2osz/direwolf
cd direwolf
mkdir build && cd build
cmake ..
make -j4
sudo make install
make install-conf
cd ~
```

You can now launch Dire Wolf by running `direwolf`. See the full guide for info on starting on boot.
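
If you just want something quick before reading that far, one hedged option is a cron `@reboot` entry (the paths here assume the default `make install` prefix and a config file in the home directory; the official guide covers more robust approaches):

```sh
# Edit the pi user's crontab...
crontab -e

# ...and add a line like this to start Dire Wolf at boot:
# @reboot /usr/local/bin/direwolf -c /home/pi/direwolf.conf >> /home/pi/direwolf.log 2>&1
```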

### Building a PTT cable for the Baofeng UV-5R

Baofeng sells a proper [audio interface cable](https://baofengtech.com/product/aprs-k1/), which will make this process easier, but it is not really needed if you have some basic soldering skills.

The push-to-talk system on most Baofeng radios works by shorting the ground of the mic cable to the ground of the speaker cable. Interestingly, the USB audio interface listed above automatically does this (aka. PTT is always enabled when the cables are plugged in). My quick solution is to use some wire strippers to open up the 3.5mm cable used for the microphone input and snip the ground line. I then just stick the relay in series with this snipped cable, and can enable and disable ground by triggering the relay.

Plugging the relay into [pin `GPIO14`](https://www.bigmessowires.com/wp-content/uploads/2018/05/Raspberry-GPIO.jpg) of the Raspberry Pi will let Dire Wolf have full control over the radio PTT.

### Configuration

The entire configuration process is outlined in the Dire Wolf [user manual](https://github.com/wb2osz/direwolf/blob/master/doc/User-Guide.pdf). Here are some additional notes (a sketch of the relevant config lines follows the list):

- Set `PTT GPIO 14` in the `CHANNEL 0` section to enable hardware PTT using the relay
- Set `DTMF` in the `CHANNEL 0` section to enable APRStt
- Uncomment the `DIGIPEAT` configuration to enable digipeating
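
Pulled together, the relevant portion of a `direwolf.conf` might look something like the sketch below. Treat it as illustrative only: the callsign, passcode, and digipeat pattern are placeholders, and your own config should start from the sample file that `make install-conf` dropped in your home directory.

```text
# Station identity (placeholder callsign/SSID)
MYCALL N0CALL-10

CHANNEL 0
# Key the radio through the relay on GPIO 14
PTT GPIO 14
# Decode DTMF for APRStt
DTMF

# Digipeat on channel 0 (pattern shown is the stock example from the sample config)
DIGIPEAT 0 0 ^WIDE[3-7]-[1-7]$|^TEST$ ^WIDE[12]-[12]$ TRACE

# APRS-IS IGate login (replace with your callsign and passcode)
IGSERVER noam.aprs2.net
IGLOGIN N0CALL-10 123456
```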

## Need help?

If you happened to follow this guide and need more configuration help, [send me a message](/contact).
87
src/collections/_posts/2021-07-06-windows-ssh.md
Normal file
@ -0,0 +1,87 @@
---
layout: default
title: Configuring a native SSH server on Windows 10
description: A tutorial for future me
date: 2021-07-07
written: 2021-07-07
tags:
- reference
- tutorial
extra:
  excerpt: I commonly need to configure SSH servers on remote Windows 10 boxes. This
    post covers the whole process.
aliases:
- /blog/2021/07/07/windows-ssh
- /blog/windows-ssh
---

Between work, school, and just helping various people out with things, I end up needing to quickly spin up SSH servers on Windows machines *a lot*. Despite what you might think, this functionality is actually built right into Windows 10, and fairly easy to enable.

## Enabling the OpenSSH service

Just like many Linux machines, Windows uses the [OpenSSH](https://www.openssh.com/) server internally. This used to be controlled by a feature flag in the *"Turn Windows features on or off"* dialog, but this can now be done through [PowerShell](https://en.wikipedia.org/wiki/PowerShell) (as a local administrator).

First, we need to add the OpenSSH capability to Windows, and enable the service:

```powershell
# Add the capability
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
Start-Service sshd

# Start on boot
Set-Service -Name sshd -StartupType 'Automatic'
```

This should also automatically configure the firewall, but you can manually verify this and enable the rules yourself if needed:

```powershell
# Check firewall
Get-NetFirewallRule -Name *ssh*

# If needed, add a firewall rule
New-NetFirewallRule -Name sshd -DisplayName 'OpenSSH Server (sshd)' -Enabled True -Direction Inbound -Protocol TCP -Action Allow -LocalPort 22
```

## Setting up key-based authentication

While we are on the Windows side, it is a good idea to install Git and Git Bash from [here](https://git-scm.com/downloads). Then, inside Git Bash, run the following to generate SSH keys on the Windows server:

```sh
# Generate
ssh-keygen.exe

# View the public key
cat ~/.ssh/id_rsa.pub
```

On your client (for me, a Linux laptop), you must generate SSH keys, and copy the public key over to the Windows server.
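
A rough sketch of the client side (the key type and file names are just my preference; the destination paths are described below):

```sh
# Generate a key pair on the client, if you don't already have one
ssh-keygen -t ed25519

# Print the public key, then paste it into the appropriate
# authorized_keys file on the Windows server
cat ~/.ssh/id_ed25519.pub
```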

The path for the file in Windows depends on your user type. Regular users append their keys to `C:\Users\<username>\.ssh\authorized_keys` (remembering to change the `<username>`), whereas local admins must append their keys to `C:\ProgramData\ssh\administrators_authorized_keys`, then update the permissions on that file with:

```powershell
icacls.exe "C:\ProgramData\ssh\administrators_authorized_keys" /inheritance:r /grant "Administrators:F" /grant "SYSTEM:F"
```

## Configuring SSH clients to automatically launch bash

By default, incoming SSH connections spawn a `cmd.exe` shell. I much prefer being dropped straight into [Bash](https://en.wikipedia.org/wiki/Bash_(Unix_shell)).

To do this, you must modify your client's `~/.ssh/config` file to add a `RemoteCommand`. An example for one of my machines looks similar to:

```
Host hostname
    HostName hostname.example.com
    RequestTTY force
    User ewpratten
    RemoteCommand powershell "& 'C:\Program Files\Git\bin\sh.exe' --login"
```

The last line is the actual command to launch Bash (through PowerShell).

## Uninstalling and disabling OpenSSH

This is a simple one-liner:

```powershell
Remove-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
```