Memories and photos of the Key Bridge

I’ve been thinking about the collapse of the Key Bridge pretty much non-stop for the past 2 days. Not only because I’ve driven over it so many times (often early in the morning), or because it’s one of Baltimore’s biggest landmarks, or because I have a fear of driving over bridges – but because I spent so much time photographing it back in the mid-to-late 2010s.

One of these photos actually made its way into the news coverage of the bridge’s collapse – to show what it looked like prior to being destroyed. This photo is from 2015 and it’s one of the very first that I took of the bridge. At the time I was pretty new to photography and since I wasn’t looking to make money, I uploaded it to Wikipedia for use on the bridge’s wiki page. It’s stayed there for the past 9 years, acting sort of like the bridge’s profile pic.

Small collage of thumbnails from some of the news sources that used my image

Since it’s a free-to-use image, it was quickly scooped up and used in many different news stories, which was pretty wild to see. I also saw some of my other images from Flickr used in “in memoriam” posts about the bridge. This one was probably my favorite of the bunch:

It’s weird to think the bridge is gone now. I visited the park next to the bridge dozens of times between 2015 and 2020 to get images of it. Because of how particular I am, only a fraction of those ever saw the light of day.

2016-11-20 Sunrise

The above shot is probably my favorite of the bridge. The sun was actually rising at the end of a pier I was standing on, and the whole sky just looked terrific.

2016-06-18 Sunrise

This one is part of a timelapse I did, though I didn’t set the timer as long as I should have, so it’s a little short (8 seconds). I still remember how excited I felt driving into the park that morning – I knew it was going to be a good one.

2017-06-21 Sunrise

This was the first sunrise of the summer. It’s actually a still from a timelapse I did but never posted anywhere (until today). Timelapses are a ton of work and oftentimes a still image says so much more, which is why I stopped doing them (that, and videos take up a ton of hard drive space).

2015-06-15 Sunrise

My car hit a pothole on the way into the park and I got a flat tire. I decided to hike into the park anyway and dealt with the flat after taking this photo.

2016-02-07

This one’s a long exposure, which is why the water is smooth.

2016-11-26

The only time I’ve been scared taking a photo. It was just me and one other person in the park. They drove in at the same time as me but stayed in their car after they parked. They kept honking at me from the parking lot (it had to be me, there was no one else there). It was super creepy. After the sun rose I went back to the parking lot and their car was there but they were gone.

2015-06-16

This is one of the first pics I ever took of the bridge and the one that’s probably been seen the most. I had to hike along a rock peninsula to get to this spot. It was kind of dangerous, but I still remember the awe I felt at seeing the bridge from there and the pride at finding such a neat location. The shot itself isn’t great, but it’s not bad (maybe just a tad dark).

2020-06-09

This is the last photo I ever took of the Francis Scott Key Bridge. It has been sitting on my hard drive since the summer of 2020. When I took this I was trying to improve upon the long exposure shot I took back in 2016, but I just wasn’t super happy with it. I also have this bad habit of taking photos and then never posting them anywhere. In fact, this is true of most photos I’ve taken since 2020. It’s sad to think this was my last one of the bridge. I was actually thinking to myself recently that it would be great to go out and shoot again – and I had this place in mind. Maybe I could capture it with a really killer sunrise as the backdrop, but it wasn’t to be.

Anyway, to whoever’s reading this: I hope you’re doing well, and thank you for taking the time to check out this post.

I think I may be addicted to creating AI art

I’ve watched from the sidelines for about a year now. One by one, the photography YouTubers I follow fell in love with it, I started seeing a lot of criticism from artist YouTubers, and a few months ago my older brother became enamored with it, reinventing himself as an AI Artist and posting hundreds of images to Facebook. So when a buddy of mine showed me Bing’s Image Creator early last week I was quite interested.

Suffice to say, I’ve fallen hard for this tech. A lot of the AI art I was seeing months ago wasn’t very impressive, but today it seems amazing. I was in awe of this creepy pumpkin it created for me:

“A scary carved jack-o-lantern smoke around it, modern H.R. Giger stylized oil on canvas”

If this were on a t-shirt, I’d consider buying it. It’s just so mind-bogglingly good.

After toying around a bit I wound up becoming entranced with creating this stuff. Early last week they were giving out 100 credits per day (now it looks like it’s only 25), so I had plenty of room to experiment with prompt ideas. And a good prompt is key. I found out early on that if a prompt isn’t yielding good images, just ditch it and move on to something else. However, once you create a good prompt, you seem to get an endless supply of cool images. For example, the above image was created with the prompt “A scary carved jack-o-lantern smoke around it, modern H.R. Giger stylized oil on canvas”. By making subtle tweaks to that prompt, you can get all sorts of good stuff.

“A scary carved jack-o-lantern in the clouds with smoke around it, modern H.R. Giger stylized oil on canvas”

And it can generate almost any kind of image you can think of. For example, you can create medical cut-away diagrams with this prompt: “Detailed anatomical cut-away diagram of ___” (I didn’t come up with this one myself, someone told me about it). It can yield all sorts of wild images.

“Detailed anatomical cut-away diagram of a smiling emoji”
“Detailed anatomical cut-away diagram Stay Puft Marshmallow Man from Ghostbusters”

And if you don’t like blood and guts, you can describe what you think should be inside and it’ll do that instead.

“Detailed anatomical cut-away diagram of Santa Claus with tiny elves inside working machinery”
“Detailed anatomical cut-away diagram of Snorlax with tiny pikachus inside operating a computer”

The cut-away effect can also be used in descriptions to create interesting looking stuff:

“A large skull in the clouds with a skeleton living inside of it. A cut-away view shows the skeleton inside drawing on at a desk. modern H.R. Giger stylized oil on canvas”

For some of these images if you look closely you’ll see mistakes. And in the skull one above I actually used Photoshop’s AI generator to fix some of the parts I didn’t like (basically I used the lasso tool to select what I didn’t like, then I had Photoshop’s AI generate something new). However, Photoshop’s AI is waaaaay behind Dalle3.

This thing is also great for memes and in-jokes. At work we have this in-joke about Alf (too complex and long of a story to go into here). I’m able to generate an endless supply of goofy Alf images.

“Profile view of 80’s Alf with a beard and sunglasses. Image is in a fantasy or cartoon or painted style.”

I’ve also had this weird obsession with Billy Idol’s Cyberpunk album. It’s not his best, and it’s so bizarre and off-brand, yet I really like the music. I can make all sorts of weird art for the album:

“It’s the 1980’s. Billy Idol is sitting at a computer, his back is to the camera. Inside the computer screen is another world. He’s working on his new album, Cyberpunk 2.0. The world is beginning to come out of the screen. Image is in a fantasy style.”

Are AI Artists Artists?

That’s a good question. I feel like even Bing considers Dalle3 the artist and not the user; otherwise it wouldn’t be blocking so many prompts. After all, you don’t see Photoshop implementing such draconian censorship for what its users can create. But the refining of prompts and editing images after the fact isn’t nothing. Even though my brother considers himself an artist and the AI creations his art, I’m still on the fence about it. I feel more like a slot machine junkie than an artist when creating this stuff. It’s fun and cool to look at – and even functionally useful in some cases too (even if just for a laugh at work). At what point does someone become an artist in anything? And even though this stuff is cool – is it actually art? Is the user’s prompt (the idea for the creation) what makes it art? Questions for a longer think piece, I suppose; for now I’ll just enjoy the stuff.

What to do with all of this stuff?

At the moment I don’t know. I was putting it on my Flickr, but then quickly deleted it all because that seemed like the wrong place. I started posting some to Twitter, which seems like the most logical place (especially since I’ve never really used my Twitter account). If you’re doing AI art feel free to hit me up over there. For now I’ll leave you all with a random assortment of AI art images I generated in the last few days (a lot of these prompts have suggestions from others – I’m including them so you can see how I got the images).

“Painting of an Armageddon of elemental wind, air, and gust, people cowering, raging colored gases and wind, clear details painted in the style of “The Last Day of Pompeii” by Karl Bryullov”
“Painting of an Armageddon of elemental wind, air, and gust, raging colored gases and wind, clear details painted in the style of “The Last Day of Pompeii” by Karl Bryullov”
“whimsical illustration, femininity in a box, vibrant colors, geometric shapes, layered paint, M.C. Escher stylized homage”
“In the distance we see an old man looking up at a black hole sun. modern H.R. Giger stylized oil on canvas”
“Inside a labyrinth in Dracula’s castle, modern H.R. Giger stylized oil on canvas”
“Alice from Alice in wonderland is sitting at a computer, her back is to the camera. Inside the computer screen is another world. The world is beginning to come out of the screen. Image is in a fantasy style.”

Some ESLint rules to better your React code

ESLint is a tool for analyzing your JavaScript code and finding potential problems. A couple months ago I taught myself how to create plugins for it so I could write my own linting rules. I wound up writing two rules, both of which I’ll go into below.

prefer-use-state-lazy-initialization

React has a neat optimization trick for its useState hooks. When doing something like this:

const [data, setData] = useState(expensiveFunctionCall());

You’ll wind up calling expensiveFunctionCall on every render, when all you really wanted was to call it on the first render. To resolve this, React allows you to pass an initializer function to useState rather than an actual value:

const [data, setData] = useState(() => expensiveFunctionCall());

Now expensiveFunctionCall will only be called on the first render.

I was honestly pretty surprised that you could do this, and it’s such a neat little trick that I thought there should be a linting rule that would detect function calls in useState and alert you to using this optimization… and that’s what this rule does. If you’re not taking advantage of this trick, it’ll alert you so that you can change your code to do so.

I thought the rule was cool enough to submit to the React team, but it’s been languishing for months in their PR queue so I don’t see it getting added to their set of eslint rules.

no-named-useeffect-functions

There are a lot of coding tips out there, and not all of them are good. One such tip is naming useEffect functions. On the surface this might seem harmless, but it bloats the code, puts naming every other kind of callback into question (for consistency), and is just generally pretty obnoxious. Now you may not agree with me, but if you do and want a rule to find such cases, well, this is that rule.
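
To make the distinction concrete, here’s a quick sketch of the pattern the rule flags versus the plain anonymous callback it prefers (pageTitle is just a made-up value for illustration, not something from the rule’s actual test cases):

// Flagged by the rule: the effect callback has been given a name
useEffect(function syncTitle() {
  document.title = pageTitle;
}, [pageTitle]);

// Preferred: a plain anonymous arrow function
useEffect(() => {
  document.title = pageTitle;
}, [pageTitle]);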

Only 2?

Yeah, for now at least. 

Is it easy to write these things?

If you’ve ever wondered how to write your own eslint rule, just check out the project for the rules above. They’re surprisingly easy to write. Once you have the basic boilerplate down, all you really need is AST Explorer and you can write them pretty quickly (this generator is also pretty useful).
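
If you’re curious what that boilerplate looks like, below is a stripped-down sketch in the spirit of prefer-use-state-lazy-initialization. This isn’t the actual published rule (that one handles more cases), just an illustration of the general shape an ESLint rule takes:

module.exports = {
  meta: {
    type: 'suggestion',
    docs: {
      description: 'disallow passing the result of a function call directly to useState',
    },
    schema: [],
    messages: {
      useLazyInit: 'Wrap this in a function so it only runs on the first render.',
    },
  },
  create(context) {
    return {
      // Visit every function call in the file
      CallExpression(node) {
        const isUseState =
          node.callee.type === 'Identifier' && node.callee.name === 'useState';
        if (!isUseState) return;

        // Report when the initial value is itself a function call,
        // e.g. useState(expensiveFunctionCall())
        const firstArg = node.arguments[0];
        if (firstArg && firstArg.type === 'CallExpression') {
          context.report({ node: firstArg, messageId: 'useLazyInit' });
        }
      },
    };
  },
};

AST Explorer makes it easy to see that useState(expensiveFunctionCall()) parses to a CallExpression whose first argument is another CallExpression, which is all the sketch above checks for.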

Just in time for Christmas, it’s react-halloween!

Today I want to talk about a silly new JavaScript library I’ve released. It’s called react-halloween and it’s sort of like a Spirit Halloween for your web app. That sounds a little ridiculous, and it kind of is… and also, since I just started this, it’s kind of like a Spirit Halloween that’s been cleaned out after the Halloween rush, so there’s not really a lot there.

What?

Ok, so back in October I decided to hunt around for “Halloween Houses” – basically houses that were completely decked out for Halloween. I thought it might be cool to create a gallery of them for my photography website. After I’d amassed a handful of images I liked, I set up a gallery – but the “stuffiness” of a normal photography gallery didn’t seem to fit the feel of the images to me.

I mulled over what I could do, and decided to add some fun touches such as ghosts flying out of the gallery link when you moused over it and a pair of eyes that would watch your mouse cursor. I ended up liking these additions, but it didn’t feel right having them as part of the photo-gallery app, so I pulled them into their own package.

This package is what I’m now calling react-halloween. It’s basically just a set of halloween-themed decoration components for React apps. Right now the collection is as follows (there’s a rough usage sketch after the list):

  • Eyes/Eye – This component is either a solo eye or a set of eyes that watch your cursor as it moves around the screen.
  • Haunted – A container that will glow when moused over. Additionally, ghosts can fly out of it. You can see this in action at the bottom of patrickgillespie.com.
  • MagicalText – Color faded text with (optional) sparkles (or sparkles with optional faded text). This was inspired by a Hyperplexed video (which was in turn inspired by Linear). This component is a little bit more than what’s described in that video, as the sparkles will fade along with the text (surprisingly non-trivial, but you can jump into the code if you really care about that detail) and other adornments are possible. I think it can be easy to overdo this on a website, and I may have overdone it on my photography site, but it’s a fun effect.
  • LightsOut – On non-mobile devices this will black out the screen and turn the cursor into a flashlight. You can see it in action on my Halloween Houses Gallery. When you click your mouse the lights will come on. I used a variation of this effect over 20 years ago on this very site (back in 1999). It was back when “intro” pages were a thing. I dropped the script pretty quickly because repeat visitors quickly got annoyed. However, I don’t imagine many repeat visitors for a gallery like this. It’s the sort of novelty that could possibly work well. But then again, I’m still a little on the fence on this one. Sadly the Internet Archive didn’t take a snapshot of patorjk.com when I was doing this effect. It was a lot more lo-fi back then – a solid circle as the flashlight with solid black as a background.
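
Here’s that rough usage sketch. The component names come from the list above, but the props and the SpookyGalleryLink component are guesses made up for illustration rather than the library’s documented API, so check the README for the real options:

import { Eyes, Haunted, LightsOut, MagicalText } from 'react-halloween';

function SpookyGalleryLink() {
  return (
    <>
      {/* Blacks out the screen and turns the cursor into a flashlight (non-mobile only) */}
      <LightsOut />

      {/* Glows on mouse-over; ghosts can fly out of it */}
      <Haunted>
        <a href="/gallery/halloween-houses">
          {/* Color-fading text with sparkles */}
          <MagicalText text="Halloween Houses" />
        </a>
        {/* A pair of eyes that follow the cursor */}
        <Eyes />
      </Haunted>
    </>
  );
}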

I have a handful of ideas for future components, but I make no promises on where I’m going with this library. Really it’s just something I made to support patrickgillespie.com, and I figured I’d share it with anyone else who was interested. Hopefully someone out there will find it slightly useful and/or amusing.

Will you be doing the same for Christmas houses?

I told myself no, but I’m already getting tips from friends and family about certain Christmas houses, so maybe. There definitely won’t be a “react-christmas” library though. If I were to make any Christmas themed components I’d probably just generalize them and stick them in react-halloween. Plus I’m way more of a Halloween fan. Don’t get me wrong, I love Christmas, but I just like the atmosphere of Halloween more.

What the Github Story on AOL Hacking Culture Left Out

The Github ReadME Project has released a really nice story on the AOL hacking community. I’d heard they were working on an article a few months ago when a fellow dev from back in those days emailed me to say he’d discussed the idea with one of the editors. After that I talked with the article’s author, Klint, but I wasn’t sure if the story was going to come out. The era is 2+ decades old at this point, and it wasn’t clear to me if outsiders would really get what it was all about or if Klint would be able to round up enough information to make a coherent piece. He did a great job investigating and understanding the topic though, and it was cool to see the article come to fruition. There was even a neat discussion of it on Hacker News.

As an aside, one amusing bit that was left on the cutting room floor was an anecdote regarding Mark Zuckerberg that Ben Stone (author of Jaguar32.bas) had recently relayed to me. Leaving this story out was definitely for the best, as it would have been off topic and probably a distraction, but I figured I’d share it here as it’s a fun little story.

Zuckerberg’s AOL Origins

Before creating Facebook, Mark Zuckerberg cut his teeth in the AOL scene by creating the Vadar Fader. It’s a silly little app, though nothing to be ashamed of – I created one too (and it’s still actively used after all these years – a story for another time though). However, an interesting wrinkle was recently added to the story when Ben emailed me to tell me he had been examining the Vadar Fader to see if it held any secrets. And well, he found something interesting. What did he find? After poking and prodding he decided to search for text strings inside of the exe file and discovered that Mark had used a well-known bas file to create the Vadar Fader. Which bas file? Jaguar32.bas.

Discovering this apparently left Ben in shock, though I think in the back of his mind he knew it to be true. The reason he thought I might find this story interesting though was because he used my API Spy to help guide him through making Jaguar32.bas (along with help from Monk-e-god). It’s a bit amusing to think about, and there’s a joke about the butterfly effect in there somewhere, but I’m sure Mark would have created Facebook even if he’d taken a slightly different path.  

Would Mark Remember?

Even after 20 years I still remember the bas files I used. They were my introduction to open source before I even knew what open source was. I even remember the very first one I used – genozide.bas. I sent so many emails to genozide for help with it that he got mad at me. And while I can laugh off my part as the butterfly effect in action, Ben’s work in the AOL hacking scene directly impacted one of the richest and most successful people on the planet, and that’s actually pretty cool.

AOHell

Whatever happened to the hacker known as Da Chronic?

A few nights ago a podcast featuring the infamous AOL hacker Da Chronic showed up in my Twitter feed. The tweet and podcast didn’t garner a lot of attention, but Da Chronic and his 1995 app AOHell were legendary back in the day. With a set of features that allowed users to punt people offline, e-mail bomb, create fake accounts and more, AOHell caused untold amounts of chaos for AOL and its users. In the wake of its release it spawned a vibrant development community around creating “progs” – apps that augmented and added functionality to AOL (not all of these were for causing chaos, most were just for having fun). In fact, this site started as a Visual Basic help site for creating such apps. You can even still download the code-generating API Spy I wrote back in the late ’90s, though unless you’re rocking Windows 98 it’s probably not going to be very useful.

The podcast touched on a lot of things I hadn’t thought about in years. I remember AOHell being a little too malicious for my tastes, but I hadn’t realized that it had invented phishing. That may have been one of the reasons my friends and I quickly moved on to other progs, as I vaguely remember thinking I might get in trouble for using such an app. But then again, by the time I got AOL (summer of ’97) the scene was flooded with progs, so maybe we just moved on because there were so many choices. One thing I do remember though was that Da Chronic was long gone by the summer of ’97. It was like he dropped an atom bomb on AOL and then peaced out. At least that’s what it seemed like from the perspective of my 15-year-old self.

AOHell’s Phishing Configuration – screen cap harvested from an archive.org version of justinakapaste.com

A decade ago I wrote about uncovering the identity of MaGuS, who’d written what most would say was the second most well-known prog – Fate X. Fate X was like AOHell but without the quasi-illegal features (i.e., the phisher and credit card faker). When I caught up with MaGuS he had become a successful developer at a large company. He seemed happy and it was apparent that being a part of the scene had been a positive experience for him. In contrast, Da Chronic’s experiences during this time period seemed to have left him paranoid and ultimately drove him away from a career in software. He also seemed somewhat guilt-ridden over creating the first phishing app and coining the term for the attack. At one point he tried brushing this off by surmising that someone else would have invented the practice anyway if he hadn’t, but it still seemed to weigh on him as a blemish on his legacy.

Anyway, if you’re interested in computer security, were a part of the scene back in the day, or even if you’ve just made it this far into this blog post, then I highly recommend the podcast below. It gives some great insight into one of the most infamous hackers of the ’90s. Even though I was more of a “white hat” progger I still owe a lot to this man. Before I stumbled into the AOL development scene I had been thinking of becoming a journalist. I can’t even imagine how different my life would be if I had taken that path.

Fake Like Buttons

This past weekend I updated my photography website to use MUI v5. The upgrade was mostly painless and while I was poking around I did a few other minor updates – the most fun being the ability to toggle Baltimore’s power grid on and off. I think I want to do more interactive stuff like this in the future. It’s only been up for a few days, but that light toggle is by far the most clicked thing on that site.

While I was mucking around in the code I also removed the faux like button that had previously accompanied each photo. The like button hadn’t been connected to anything other than my analytics, but it would shoot out confetti when clicked and fill-in to indicate that a like had been left. Why did I have fake like buttons on the site? Good question…

Back at the start of the year I had the following shower thoughts (those “profound” thoughts that only come to you while you’re in the shower):

  • Have like buttons become the dominant language of the internet? Do people click them to indicate they like something? Or is the clicking of a like button dependent on factors like the platform it’s attached to?
  • How often do people unlike things? And what would happen if a website became defensive and started arguing with you about taking away a like?

I found the second thought especially amusing and after laughing about it to myself I decided I would put something into my new photography website to test these ideas out. And just to be up-front – and before I get too far into this story – I’ll say that the results of this “experiment” weren’t very profound (as with most shower thoughts), but maybe it offers a few nuggets of wisdom somewhere.

Anyway, I launched the new photography site this past February and over a 7 month period 3,333 unique people visited it. On average they spent 1 minute and 13 seconds looking around and in total left 236 likes. This indicates to me that people don’t click like buttons to communicate they like something (my photos aren’t that bad, right? right??). The decision to click seems to be more nuanced. It could also be that people were suspicious of a like button that didn’t have an obvious association.

In retrospect this seems pretty obvious. When I click the thumbs up button on a YouTube video, I do it knowing I’m helping the creator and/or helping the algorithm so it can provide me with personalized recommendations. If I saw a random like button on a website I don’t think I would click it unless I knew what the side effects were.

The second part of the experiment produced another non-shocking conclusion: unliking is pretty rare. I had set things up so that if someone attempted to unlike a photo, the site would pop up a notification letting them know they must have made a mistake and not to worry, the photo was still liked. If the user continued to press the like button, the site would become increasingly frustrated in its notifications to the user until finally calling them a filthy bastard and letting them unlike the photo.
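
For the curious, the logic behind the gag was roughly along these lines. This is a simplified sketch rather than the site’s actual code – the wording of the messages (aside from the gist of the last one) and the showNotification helper are made up for illustration:

// Escalating messages shown on each successive unlike attempt
const UNLIKE_MESSAGES = [
  "Whoops, you must have clicked that by accident. Don't worry, it's still liked!",
  "There's no way you actually meant to unlike this.",
  "You're really going to do this?",
  "I can't believe you're still clicking this.",
  "This is your last warning.",
  "Fine, you filthy bastard. It's unliked.",
];

function handleUnlikeAttempt(photo) {
  const attempt = photo.unlikeAttempts || 0;
  showNotification(UNLIKE_MESSAGES[Math.min(attempt, UNLIKE_MESSAGES.length - 1)]);
  photo.unlikeAttempts = attempt + 1;

  // Only the final message actually removes the like
  if (attempt >= UNLIKE_MESSAGES.length - 1) {
    photo.liked = false;
  }
}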

I laughed at the thought of this happening and after I launched the new site I checked my stats every few days to see if such an event had occurred. I figured that after it happened a few times word would get around and I would see a whole bunch of unlike events in my analytics. However, no such thing happened. After a month I realized that unliking something is very rare, and if I ever wanted a chance of someone seeing my gag I would need to make some alterations. To help with unlikes I decided to add a confetti explosion animation to the like button. I figured some visitors would want to see it more than once, leading them to unlike and re-like a photo.

Over the course of 7 months, 27 unlike events occurred, and only 11 of those made it to the final message (which took 6 clicks). Nearly all of these were me, so the gag was maybe too confusing and too hard to find. As for the remaining 16 unlikes, I assume that most of those were people who just wanted to see the confetti animation again. These people were probably super confused by the message that popped up, and they probably thought I was some super sensitive prude or something. In hindsight, it does seem like an odd gag to put in a fine art photography website.

If there’s anything I learned from this it’s probably that unlikes are rare and that people like with intention – they aren’t going to click something if they don’t know what it’s for. What started as something that had me laughing in the shower ended with me realizing I had made something that was amusing only to me. I suppose it’s probably best to leave shower thoughts in the shower.

Becoming the Maintainer of an Abandoned Open Source Project

Lots of ink will probably be spilled documenting the stories of what people did during the summer of 2020. With COVID-19 spreading throughout the world and many places going into lock-down, it was an uncertain and strange time. Many people took the opportunity to catch up on old TV shows, learn about investing, or create a side hustle. Me? Well, somehow I found myself becoming the maintainer of one of my favorite open source projects.

Part 1: The Big Rewrite

Almost 4 years ago I was gearing up to rewrite a rather huge AngularJS app. I had been a bit disillusioned with Google’s decision to kill off AngularJS (don’t get me started with Angular 2), and I was looking to build an app that wouldn’t be tied to the whims of Google, since they have a tendency to shut things down. After surveying the landscape, a solution involving the open source libraries React, Redux, and Material UI seemed like a good fit. All that was left was finding a good datatable component…

mui-datatables looked clean and worked smoothly. It made my app look beautiful, and I was excited. To add to this, the official github repo seemed to be buzzing with activity. My first feature request got 12 reactions and a bunch of comments. There was clear enthusiasm about this project, and it stood out amongst all of the Material UI based datatables I evaluated. I felt like I’d found the final piece to my puzzle.

For a while everything went smoothly. I was in no rush and my work had me juggling several projects, so the rewrite took place in between other tasks I was doing. It wasn’t until I was a few months in that I realized I needed an ability that mui-datatables didn’t provide. “No worries,” I thought, “I’ll just put in another feature request,” but when I checked the repo, I noticed Greg, the library’s creator, was nowhere to be seen. Instead, someone named Gabriel was now running the show.

At first this was fine, and I was glad to see someone had taken up the mantle and was moving the library forward, but I soon found myself increasingly frustrated. I was so close to having the library that I needed, yet many of my pull requests (PRs) were left to languish and some updates to the library made my life harder. To add to this, Gabriel wanted to rewrite the whole library, deeming its internals fundamentally flawed. He made a pinned issue announcing this big rewrite (“Version 3.0”), and said it would fix the table’s internals and possibly completely change its API. However, after this announcement he continued to work on features for version 2.x, and in updates months after this announcement he would state that he hadn’t started on version 3 and was still thinking about things.

I was concerned. Once again I looked at other datatables, but I was filled with dread when I concluded that mui-datatables was still the best fit for what I was doing. I either had to stick it out and convince Gabriel to take my PRs, or fork the library and be on my way… and well, I forked it. And that actually turned out to be a pretty freeing experience. But there are downsides to forking that cannot be ignored:

  • You don’t get updates from the community – you’re on your own.
  • When you eventually leave your position, you leave the next developer to learn the fork you made. I’ve been in this boat and it’s not a fun boat to be in, especially as the fork gets old and bugs are found.

After the app I had been working on launched, I thought a little more about these 2 points. One of the key reasons for the rewrite (but not the only reason) was to get off of the now archaic AngularJS framework so that future developers could easily jump into the project. But leaving someone with a forked library that was heavily modified seemed to clash with this idea.

I decided to once again check in with mui-datatables to see if there was some way we could reconcile. What I found was 60 PRs, 2 pages of unanswered issues, and someone @-ing Greg and Gabriel, asking if the project was dead. The last release was 3 months old, and it had a number of problems (a malfunctioning resizable-column feature, broken responsive design, etc.). After Greg and Gabriel, I was the 3rd biggest contributor to the project, and with all the updates I’d done in my fork, I realized I was probably the most qualified person to take over. To add to this, COVID-19 was spreading around the country and my work had recently closed down. I was at home with nothing to do for the foreseeable future. I needed something to take my mind off things.

I decided to reach out to Greg on Twitter to let him know the situation and ask if I could take over. I’d previously sought him out to ask why he’d left the project so I knew he’d be responsive on Twitter. A week later he handed me the keys. I then went to 2 of my oldest PRs, merged them in, and did a release. I was the new maintainer.

Part 2: mui-datatables 3.0

Gabriel briefly returned to wish me well. He told me his time had become scarce, and he simply didn’t have enough of it to spend on the project. Even though I’d felt a lot of frustration with some of his decisions, we’d been friendly and I appreciated the work he’d done. Had he not taken on the project, it most likely would have died. However, a year had passed since he’d made his announcement about the next version of the library. Was a version 3 still on the table? I’d done a lot of work on my fork and there were a lot of PRs. Maybe there was enough new stuff to justify a big release.

For the next two weeks I pored over the PRs and open issues. It was oddly fun and proved to be an eye-opening experience. Some of the stuff people submit is completely nonsensical while other things are highly complex and clearly had a huge amount of work put into them. For example, one person had rewritten the whole library in TypeScript, and while this was a neat idea, it was completely impractical and would have made it almost impossible to merge the other updates in (plus, I’m not completely sold on TypeScript – but that’s another story). On the opposite end, there were some requests that were small with little explanation. Oftentimes they didn’t work at all and/or caused the tests to fail. It was like people submitted their work without checking it.

I tried to be nice. After all, each PR represented someone spending their own free time to better this library. At the very least they deserved someone trying out and reviewing the changes that they made – even if they had to wait a year. And to my delight, people were pretty cool. I either got a positive response thanking me for looking at their PR, or no response at all.

Of the 60, I ended up accepting 23. The vast majority of these were bug fixes and minor updates. The only submission that really fell into the “cool new feature” category was one that added an injectable component feature. During this period I also ported over most of the updates from my fork, which in the end, accounted for all but 5 of the new features/API updates. A thorough review of the code base was also done which cleaned up a handful of anomalies. For example, most of the deprecation warnings had accidentally been disabled in version 2.14.0, and starting in version 2.13.1, a large 5MB file was accidentally being included in the npm package. No one seemed to have noticed these things though.

I also updated the library from using version 3 of Material UI to version 4. In the year that had passed, most of the Material UI community had moved on to version 4. Not updating the library to correctly work with version 4 had probably hurt adoption. When I had parted ways the previous year, mui-datatables had 15k downloads per week on npm and its closest competitor, the feature-heavy material-table, had 7k a week. Now the tables had turned. Material-table was crushing it at 80k downloads a week to mui-datatables’ 25k. The version issue most definitely wasn’t the only reason mui-datatables had lost ground, but I have to imagine it was a significant factor in people’s decisions.

In the end, version 3 would be no great revelation, but it would be a step forward and hopefully a step in the right direction.

Part 3: The Rise and Fall of a Maintainer

After it was released a few people chimed in to say thanks and report bugs, but there was no big celebration that the library was back to getting updates. I got the impression that many people using the table had built their apps around it a year ago. It didn’t seem like it was attracting a ton of new users. From github stars, I could tell that on average, 1.35 new people were starring the library a day, which seemed a bit lower than it had been in the past.

To get things going again I decided to start work on two features I felt were essential:

  • A cell rendering method that would significantly boost performance.
  • Draggable columns.

I had a soft spot for draggable columns. It had been discussed with feverish enthusiasm when I was originally looking at the table, and I felt like it would be poetic as the first big feature of the 3.x era. I wanted something that was nice, though – I didn’t want something cheap-looking. So I got to work and created what I felt was a slick draggable column feature:


Most people will say they don’t care about little effects like this – that all they really want is the functionality – but over time I’ve found this to be false. Little touches like this add up, and overall lead to users liking a product more.

As I prepped a new release, I began to talk about new features with the few people still hanging around. Maybe a grouping feature should be next? Editable columns? I was excited. I was going to get this table back on track and soon it would have features that rivaled material-table. But in addition to the lack of activity in the repo, something else bothered me: surely I couldn’t be the only one who felt datatables for Material UI were lacking? Hell, when I was doing AngularJS there were several great community options. What was going on?

I went digging, and found my answer on the material-ui github repo. In a thread lambasting material-table, several people stated that they weren’t happy with the community options. The co-creator of Material UI, Olivier Tassinari, responded to the criticism by assuring them that they were hard at work on an official datatable component. It would be ready for preview in September – basically at the end of the summer.

I had been out of the loop, and though it seemed obvious, apparently the community was displeased with both mui-datatables and material-table. The creators of Material UI realized they needed an official solution, so in October of 2019 they’d announced plans to create one. That explained why no other table had come forth, and it made me feel like mui-datatables and material-table were both lame ducks. A good solution was on the way, and there was no point in a community project if an official table was going to be supported. (however, I would later learn that certain parts of this official table were for paid users only – so material-table and mui-datatables would still have a place in the future)

I was a little distraught, but decided to continue work, albeit at a slower pace. Then, after 10 weeks of being at home, I got the call to come back to work. I reintegrated mui-datatables into my work project and showed off some of the new features. My team lead seemed impressed and was thankful we were no longer using a forked project. With work on mui-datatables now restricted to evenings and weekends, my contributions to it slowed even more. Then, one day in late September, I handed in my resignation at work.

Wait, what?? Oh yeah, I probably forgot to mention that during my time off I was kind of stressed about my future. My work had been extremely generous in paying us to stay home and do nothing, but there were rumors about leave without pay in the Fall. With no telework option available and no assurances made about what might happen once the leaves started turning, I had decided it was best to hedge my bets and look for another position.

In a twist of fate my new job would involve working on another AngularJS 1.x project (it never ends!) and possibly porting it to React (though as of now that hasn’t happened – my guess is we’ll still be dealing with AngularJS 1.x apps 10 years from now, though that’s a topic for another time). Now I had even less motivation to continue work on mui-datatables. I didn’t want the project to fall back into disarray, so I felt like the only reasonable thing to do was to find a successor. Thankfully during my time as maintainer another developer, Woo Dohyeong, had joined me in my quest to better the library. He was the obvious choice to take over and with Greg’s blessing, I passed the torch. After Woo made his first release I stepped back.

It was bittersweet. Part of me knew I didn’t have enough time to be a maintainer forever. It’s a job that gobbles up the extra minutes of your day and it’s mostly thankless. I didn’t talk too much about it above, but there were a few days where I would handle half a dozen questions, and the majority of people wouldn’t say thanks or respond; some people would even be rude. However, reviving the table and improving upon what so many others had built was rewarding. There was a sadness in stepping back, but it was for the best.

Final Thoughts

Well, I didn’t expect this to be so long, but the story (even trimmed down) turned out to be much longer than I thought it would be. I needed a place to write it down though, and if you read it, thank you for reading my story. The summer of 2020 was a crazy time, and even this bloated blog post barely scratches its surface. Hell, I didn’t even write about my bike rides through empty streets or obsession with Hollow Knight (and Animal Crossing, and Cuphead), not that those things are in any way relevant to mui-datatables, but they filled the gaps between development. Anyway, hopefully you found this entertaining or enlightening. Next time you use a piece of open source, be sure to show appreciation to the maintainer, and don’t be afraid to contribute yourself. If you have the time it can be a fun little adventure. Also, don’t get too caught up in the endless cycle of front-end rewrites. There’s a certain madness to it.

New patrickgillespie.com

2021 Update: I’ve since re-done the site again, when I get a chance I’ll write something up on it. The post below refers to an older design.

***

Another redesign, this time a good one, I swear! I decided to learn ReactJS, and for my first project I redesigned my photography website, patrickgillespie.com

My previous design was pretty bad – I guess no one had the heart to tell me. For this new one I studied several other designs and took elements I thought worked best. It’s simple, responsive, and the code is up on github if anyone is looking for a portfolio template.

patrickgillespie.com had been pretty dead for about a year, receiving an average of 2-3 visitors a week, probably bots. The domain was about to expire earlier this month, when I noticed a big uptick in visitors leading up to the expiration day (~20 a day). Were these other Patrick Gillespies eyeing the domain? Domain vultures looking to scoop up a site that was about to fall into the abyss? I guess I’ll never know.

patorjk.com, on the other hand, has been alive and kicking, at least visitor wise. No one reads this blog, but the apps on the site see hundreds or thousands of visitors a day, making me feel sort of bad about neglecting the site. It’s like a rudderless ship that has somehow managed to successfully sail the ocean.

Anyway, if you’re reading this, I hope you enjoy the new portfolio site or at least find some use in the code. The most useful piece of it is probably the create-image.js script, which creates multiple sizes of a photo and extracts its exif data into a JavaScript object which the application can use.
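
If you’re wondering what a script like that does under the hood, here’s a rough sketch of the idea using the sharp library. This isn’t the actual create-image.js – just an illustration of generating a few sizes and pulling basic metadata into an object the app can consume:

const sharp = require('sharp');

const WIDTHS = [400, 800, 1600]; // sizes to generate for responsive loading

async function createImage(inputPath, outputPrefix) {
  // Write a resized copy of the photo for each target width
  for (const width of WIDTHS) {
    await sharp(inputPath).resize({ width }).toFile(`${outputPrefix}-${width}.jpg`);
  }

  // Gather basic metadata into a plain JavaScript object
  const meta = await sharp(inputPath).metadata();
  return {
    width: meta.width,
    height: meta.height,
    format: meta.format,
    // meta.exif is a raw Buffer; a parser like exif-reader is needed
    // to turn it into camera settings, capture dates, etc.
    hasExif: Boolean(meta.exif),
  };
}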

Using Dithering to Create Old School Gaming Filters

Recently I’ve been reading up on image dithering. It’s kind of cool. It’s a way of transforming an image into a smaller color space while still simulating as much color depth as possible. It has lots of practical uses – printing, displaying images on screens with limited colors, etc – but I realized it could also be used to inject images with a heaping dose of gaming nostalgia. I couldn’t resist doing a quick proof of concept, so I’ve created a new web app that will transform an input image into what it may have looked like on an old gaming console by way of various dithering algorithms. For example, below you can see what I may have looked like on a Game Boy.

As you can probably infer from what I said previously, the app is actually pretty simple. All it’s doing is resizing the image and applying an image dithering algorithm to it.

There are various image dithering algorithms, but most work by trying to intelligently add noise to areas of an image where it doesn’t have the right colors in its reference palette. This noise can help simulate the missing colors. One place you may have seen image dithering is in gif files, which typically have small color palettes. In fact, the noise you see in many gif files is commonly due to the dithering algorithm that was applied to them. Without dithering, these files would suffer from color banding, which can be pretty ugly. In the video below you can view comparisons of what undithered vs dithered images look like given color palettes of various sizes.

As you can see, dithered images are able to look much better with far fewer colors. For the new app, I chose to use the popular Floyd-Steinberg dithering algorithm for most of the results (and used the wonderful RgbQuant.js library for this), and also used Ordered Dithering for a few other cases (I couldn’t find a good implementation of this algorithm, so I just wrote my own version based on what I read here). If you play around with the app, you’ll notice that the results with ordered dithering are much worse. I only included this option because it appears to be what the Game Boy Camera used. I’m not 100% sure on this, but the cross-hatch pattern found in many Game Boy Camera images is indicative of ordered dithering.
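
If you want a feel for how error-diffusion dithering works, here’s a bare-bones sketch of Floyd-Steinberg on a grayscale buffer with a two-color (black/white) palette. The real app leans on RgbQuant.js and full color palettes, so treat this as an illustration of the core idea rather than what the app actually runs:

// pixels: grayscale values (0-255) in row-major order, width x height
// Returns a copy where every pixel is snapped to black (0) or white (255),
// with the rounding error spread onto neighboring, not-yet-visited pixels.
function floydSteinberg(pixels, width, height) {
  const out = Float32Array.from(pixels);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = y * width + x;
      const oldVal = out[i];
      const newVal = oldVal < 128 ? 0 : 255; // nearest color in the palette
      const err = oldVal - newVal;
      out[i] = newVal;

      // Classic Floyd-Steinberg error weights: 7/16, 3/16, 5/16, 1/16
      if (x + 1 < width) out[i + 1] += (err * 7) / 16;
      if (y + 1 < height) {
        if (x > 0) out[i + width - 1] += (err * 3) / 16;
        out[i + width] += (err * 5) / 16;
        if (x + 1 < width) out[i + width + 1] += (err * 1) / 16;
      }
    }
  }
  return out;
}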

I should also note that while the new app can give you an idea of what an image may have looked like on an old gaming console, some of these filters won’t be exactly true to life. For example, the NES is capable of displaying around 54 different colors, but for a single sprite it could only display 4 colors. So it wouldn’t really be realistic to expect an NES to display an image like this:

For it to display that image, it’d have to be composed of a bunch of tiny sprites that overlapped, and I’m not sure if there would be any limitations when doing that. So these filters are really just best guesses, and mostly just for fun.

Lastly, I’ve also put in an option to create output without dithering. In some cases this produces a more realistic gaming result, and sometimes combining dithered and non-dithered images can give a better result, like this Sonic image: (I combined the dithered and undithered versions in Photoshop, masking in the areas I liked from each version).

Honestly I’m not sure what practical use this app can have other than maybe a few minutes of fun, but I hope you enjoy it. If you’re interested in learning more about dithering algorithms check out this blog post on the subject, it covers in detail how several of the more popular ones work.

  • Check the app out here: Old School Gaming Filters
  • Create your own filter here: Palette Swap (this was actually my original idea, but it takes a little more work and I found pre-made filters were a little easier to work with)