Wait, That's Gary Oldman?


#media

I think the first movie I saw Gary Oldman in was Batman Begins. Not long after, as a teenager who had dodged my parents’ prohibition on the Harry Potter franchise (they were right, eventually, though for very different reasons), I saw him as Sirius Black and had no idea it was the same guy. And when someone got me to watch this weird nineties sci-fi movie about elements and love or something, I again didn’t recognize him as the antagonist until way later. I’m currently into the second season of Slow Horses, and it’s more of the same – wait, that’s Gary Oldman? He’s amazing!

There are actors who are good, but you can never forget who you’re watching. I’m not talking about the folks who sort of always play the same types of characters due to typecasting or whatever, or actors in comedies or action movies where there’s less breadth in characterization. It’s more that for most actors, their star power overshadows their role.

Like, say, Jeff Goldblum. I love me some Jeff Goldblum, and he is a suave, eccentric gentleman, but it’s always Jeff Goldblum up there. Tom Hanks is always Tom Hanks, and a lot of his movies would have done very poorly without his genuine likability. Al Pacino’s great, but he’s always Al Pacino. Meryl Streep, Helena Bonham Carter, and Angelina Jolie prove it’s not just an issue with guys. John Lithgow. David Tennant and Matt Smith, as heretical as it is for a nerd to criticize a Doctor. They’re fine, or great, or astounding actors. But you never forget who you’re watching. They don’t disappear.

There’s the method actor type, the canonical example of which is Daniel Day-Lewis. A guy who supposedly goes for it so hard it’s annoying to those on set, and does two movies a decade, but they’re Very Important Movies that film students will study and I will not find very fun to watch. Marlon Brando method acted too, and was obviously amazing. But he had the same problem as Day-Lewis: even though they lived as their characters and produced amazing performances, I can’t avoid seeing the actors first and the characters second. They lived their roles, but didn’t disappear into them.

I think Gary Oldman is the greatest actor working today. He inhabits his roles to the point where you look past the actor and just see the character. As if you’re not watching TV or a movie, and you’re reading a book instead. There are others – Karl Urban has made me do similar double-takes. But nobody does it like Gary Oldman.


The App Store is the Worst Part of Apple


#code

Just got an email from the Apple Developer program detailing a few updates to their review guidelines. And hey, it sounds like good news: in the US, developers can accept payments outside of the in-app purchase situation that's been a requirement for years. Progress! Right?

Except.

There had to be some catches. This was a result of a court order, so clearly they're not doing this willingly. And although it's now technically possible to take a credit card payment in your app that avoids giving Apple 30% of your money, it's functionally impractical-to-impossible.

You have to:

  • Still offer in-app purchases through Apple's system (for their 30%)
  • Request and be issued an entitlement, which you then need to implement in Xcode
  • Provide exactly one URL, which cannot change without re-review
  • Format the button so it doesn't look like a button at all
  • Use a specific icon at a specific size as part of the button's text
  • Accept that Apple will throw up a very scary wall of text saying you're probably about to get scammed, despite having checked during review that the URL is safe
  • Accept that Apple will now "only" take 27% of your money instead of 30%

Failure to do any of these things to their exacting specification will get you rejected. There are some weasel words in the guidelines, like how the link-out icon "must visually match the size of the text," which could mean anything to anyone and will definitely result in rejections. And you still have to provide Apple with a report of all your earnings each month so they know you're giving them their 27%.
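For what it's worth, the view itself is the easy part. Here's a minimal SwiftUI sketch of a link-out "button" along the lines the guidelines describe – the URL, SF Symbol choice, and sizing here are my own guesses at compliance, not Apple's official template:

```swift
import SwiftUI

// A hypothetical external-purchase link, styled as plain text rather
// than a button, with an icon scaled to visually match the text.
// The URL and icon are placeholders, not Apple's sanctioned values.
struct ExternalPurchaseLinkView: View {
    // This exact URL has to match what was approved during review.
    private let purchaseURL = URL(string: "https://example.com/buy")!

    var body: some View {
        Link(destination: purchaseURL) {
            HStack(spacing: 4) {
                Text("Purchase on our website")
                Image(systemName: "arrow.up.forward.square")
                    .imageScale(.small) // keep the icon text-sized
            }
            .font(.body)
            .foregroundStyle(.primary) // no button chrome
        }
    }
}
```

The hard parts aren't in this view at all: the entitlement request, the scare sheet Apple throws up when the link is tapped, and the monthly revenue reporting all live outside your code.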

This all matches what happened in the Netherlands with dating apps, where Apple decided that if they were to comply with these court orders, they'd do so in a way that made it as painful as possible for both developer and consumer.

The App Store is the worst part of Apple.

The deal with Apple, for a long time, was easy to understand. You'd buy a Thing, and that Thing would be different from the things it was competing with on the market in ways that made it desirable. It was easier to use, or looked nicer on your desk, or made you look cool to use it, or had a nicer way of interacting with it, or had software that was nice to use. You'd pay more for it than the other things on the market because it had those benefits.

But then, the iPhone got huge. All the software that got created for it was a big part of that. But Apple couldn't see that software as a benefit to the ecosystem, despite the ads. Infamously, the guy running things saw any money being made in the ecosystem other than by Apple itself as a parasitic situation. Every dollar earned by some parasite on the system, like, say, developers, would have to be fought for.

That attitude stuck with the App Store leadership, it seems. Rather than accept the situation for what it is – that only having one software vendor for a platform is unacceptable – the folks running the App Store have decided to make the march to independence from the App Store as poisonous a process as possible. They'd rather make the experience of developing for and using the product terrible than give up 30% of ninety-nine cents.

It's gross, and it's not going to stop until a judge decides that this kind of crap isn't good enough. Or maybe the right couple of old guys retire.


Inkblot Extraterrestrials


#space

Aliens are in the news.

It's not surprising. A little less than half of the electorate of the US has decided that conspiracy theory as a form of government is just fine with them, so why not? It's all fun to laugh at, until it isn't.

But I think the thing with aliens is really interesting. Whether or not you believe that there are aliens out there right now buzzing through our skies says a lot about you as a person. There are some obvious things – a lack of trust in the government, disbelief that non-Europeans could have built big things, not understanding technology so it must have been reverse-engineered – that sort of stuff.

But a belief that aliens exist and are among us is also a weird kind of optimism.

"But where is everybody?" is (allegedly) how Fermi put it. If the universe is big, and life can arise in other places like it did here, and Earth isn't the first place to develop life, then someone should be out there. Little green men aren't walking around downtown, so their absence sort of demands an explanation. Maybe we're early in the life-bearing era of the universe. Maybe every intelligent species eventually nukes themselves into non-existence. Maybe they all develop giga-computers and stop caring about the universe as they plug into the Matrix. Maybe FTL isn't possible. Maybe it's hard to do space travel and also feed everyone on your planet. Or maybe we are unique, and the professor at my college was right, and his dubious math showing life couldn't actually arise anywhere did prove it had to be intelligent design, and the people who didn't get good grades because they disagreed were wrong. But I digress.

If there's other life in the universe, but there are no alien visitors to Earth, it's bad news for humans. That means somehow every species has gotten stranded on their homeworld, and it's probably for reasons that are kind of a bummer. So, to me, the people who advocate that yes, actually, the aliens are out there are profoundly hopeful. We can get through the Great Filter, because someone else already did. We'll eventually solve hunger, because how would you go out into the universe if you haven't solved scarcity? We'll eventually figure out how to avoid nuclear armageddon, because check it out, they did. FTL is possible – see, they're here. We won't all just lock ourselves into a VR-island-of-lotus-eaters, because they managed not to.

I don't think the aliens are here.

And besides the fact that we have no credible evidence for them, I just don't think anyone can get through that Great Filter. In 1945 the whole world figured out how easy it would be to fail that test, but really, you don't need nukes for that. The World Wars kind of stopped because of nukes, and without them, we'd probably have had another one in the 50s or 60s. A culling like that every twenty years would keep us firmly planted on this planet. But with nukes, we have to worry about The Last War, where those of us who aren't radioactive vapor get to pay for bullets with bottle caps and try not to get eaten by radroaches.

Maybe nukes aren't the Great Filter. It could be something else. It wouldn't have taken that many base pairs to be different, and the 2020s could have been like the 1350s. As long as the aforementioned conspiracy theorists keep getting elected, I'm not sure that won't be a problem going forward. That'll definitely keep us focused on continuing to breathe, not going to Alpha Centauri.

I mentioned the whole voluntary-extinction-through-Matrix thing because that's less overtly horrible than the rest of them, and (given it would be possible) I can totally imagine us doing that. Surveillance Capitalism is a lot easier when your whole mind has cookie tracking on it. Or maybe it's boring, and we just exhaust the resources of the earth as we create a lot of value for shareholders. Not every Great Filter has to be violent.

My guess? We'll eventually get caught in a Great Filter, just like everyone else, and that's why there are no aliens, and it's probably because of a thing we don't even know about yet. It's pessimistic, I know. I'd like to be proven wrong. But the types of folks that just insist that they exist don't give me a lot of confidence.


Discovery with a Computer Isn’t Discovery


#chemistry

I've talked about this sort of thing before, but here we go again.

Computational models in chemistry are cool and useful. They predict and explain things in a way that's difficult or impossible to do at the bench. But I take issue with the title of this recent paper in JACS: "Computational Discovery of Stable Metal–Organic Frameworks for Methane-to-Methanol Catalysis" (emphasis mine). The authors have done no such thing.

This paper describes a workflow where a database of MOFs is mined for this-and-that feature, and computational methods are used to predict which ones would be good catalysts. Some of them are probably good catalysts, according to their DFT models. Discovery!

Except they didn't actually do anything. There are no turnover numbers or yields or anything like that because they didn't run any reactions. They didn't find a collaborator to run any reactions. They suggest in the manuscript that, well, someone should try these things. But it seems like they consider the matter closed because their models say it should work. The folks at JACS agree, I guess.

That isn't a discovery. It's not even a result. It's a hypothesis. One that needs to be tested before anyone can claim a discovery.

It's especially frustrating to compare it to another paper in the same batch of ASAPs. In that one, the authors look at crystal structures and try to find something that will do the reaction they want. But of course, they designed a bunch and ran them to see which worked best. I'm sure their models told them which one would work best. But then they went and did the thing. Made them and tested them. Optimized them based on results. And in the end, isolated 35 mg at 93% yield, 99:1 dr, 3.5:96.5 er, and 5000 catalyst turnovers. Now that is some science.

Modeled, predicted results aren't results. They're hypotheses.


Attention Grabbing TOCs


#chemistry

Sometimes a header graphic comes up in the ASAPs and I have to shake my head a bit. Like earlier this week, when someone wrote out their N-dealkylation catalyst as Ir[dF(CF3)ppy]2(dtbpy)PF6 instead of just drawing the thing out. It's ChemDraw, not ChemWrite, you know.

But man, one that popped up today was great. High-energy compound chemists are something else. And apparently, their graphics concepts are hard to beat:

Striking.


Crossroads


#life

It's good to be back in Indiana. New job, old friends. Living in the city rocks.

Getting to Indiana was bad.

No direct flights from San Jose, so we flew through Denver. A two-and-a-half-hour layover would be rough, but it would give the cats a chance to stretch their legs in an animal relief room. And Denver isn't so bad as airports go. Decent food options, and it wouldn't be busy on a random Wednesday.

But wow, is Denver windy.

It was when we were circling for longer than usual that I started realizing there might be an issue. We actually made one attempt at a runway but pulled up while still a few thousand feet off the ground. As it turned out, wind shear at ground level was grounding all outbound flights and preventing any from landing.

The captain, with an apologetic tone, told us we were running out of fuel and would divert to a nowheresville airport in Nebraska to refuel and wait for the wind to calm down. It took about two hours. By the time we made it back to Denver, our connecting flight to Indy had just left.

So now we had two nauseated cats, and one nauseated Lucas, from the turbulence. No flights to Indy until the next morning. And a customer service line that wound through the whole airport, since everyone else had missed their connections too. But I managed to get ahold of an agent using the fancy iMessage customer service thingy and proposed an option: we'd fly to Chicago instead.

The in-laws let us stay at their place in the Chicagoland area for the night. Taylor drove all the way up from Indy and picked us up the next morning. Hailey let us borrow pillows and blankets. Taylor helped me retrieve our luggage, some of which had made it as far as Montana before eventually being delivered to the Indy airport a day later. Then he let us borrow some plates and bowls and stuff while we wait for our household goods to be delivered. We were tired, but we made it.

We had a very bad time getting here. But that experience reminded me of one of the biggest reasons I wanted to be in Indy: the people. There are good people here. I've missed them very badly.


Feeling Dumb for a While


#code, #3d-printing

It's always tough to learn a new set of tools. It's especially tough to learn your second set of tools. The first was the only way to do it for a long time, after all.

Doing the full conversion of NMR Solvent Peaks from UIKit to SwiftUI was one recent example. It took me most of a summer just to get the flow right. Easy things became hard, and it was frustrating. I bounced off of it more than once and just figured I'd go back to UIKit. But I got the hang of it eventually, mostly by just trying to make the thing I wanted to make instead of a million sample projects and tutorials. Every time I hit a snag, I'd search through Stack Overflow or Hacking with Swift and get an answer to that particular issue. There were many such searches on the first day. There were fewer over time.

Eventually, those easy things that had become difficult with the new tools? They were easier than before. And problems I avoided because of the complexity? They were within reach now.
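To make that concrete with a toy example – this isn't code from NMR Solvent Peaks, just the flavor of the thing – a scrolling, tappable list that takes a delegate, a data source, and a storyboard's worth of ceremony in UIKit is a handful of declarative lines in SwiftUI:

```swift
import SwiftUI

// A toy solvent list with navigation – illustrative only.
// Names and layout here are made up for the example.
struct SolventListView: View {
    let solvents = ["CDCl3", "DMSO-d6", "D2O", "C6D6"]

    var body: some View {
        NavigationStack {
            List(solvents, id: \.self) { solvent in
                NavigationLink(solvent) {
                    // Detail view; the real app would show peak data.
                    Text("Peaks for \(solvent) would go here.")
                }
            }
            .navigationTitle("Solvents")
        }
    }
}
```

Getting to the point where something like this reads as obvious, rather than foreign, was most of that summer.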

The same sort of thing is happening with me and 3D modeling. I've been using TinkerCAD for a long time to design things to 3D print. But it's pretty inelegant. Just about everything is a combination of basic shapes and their intersections. Now, I could build some pretty cool stuff with TinkerCAD – two of those designs I'm proud enough of to publish on the internet, and it's really satisfying to see people print them for themselves. But there has always been a huge shadow hanging over all of these designs: what if I put on my big boy pants and used Fusion 360?

Well, I had tried Fusion 360 a few times already. Each time, I bounced off of it. Simple operations in TinkerCAD didn't seem possible in Fusion 360. And years of using TinkerCAD had trained my brain to think of 3D objects as constructions and combinations of simple shapes. So the entire design system of Fusion 360 was foreign to me.

But just like with SwiftUI, once I sat down and gave it a real try with a real project, it turned out that those operations were actually possible. And with a bit of time, easier than in TinkerCAD. I just had to rewire my brain a bit, and be willing to feel dumb for a while. But here's the best part – now, things are possible for me in Fusion 360 that were absolutely impractical in TinkerCAD. Pulling in faces a bit for fit tolerances required completely rebuilding a part in TinkerCAD. In Fusion, it's like four clicks. And extending one face in a 3D trapezoid pattern to fit a similar 3D trapezoid hole in another part? More or less impossible in TinkerCAD. In Fusion, again, it's like four clicks.

Learning new tools stinks. It would be so much easier to just get my work done with the old tools. And I have to feel dumb, despite being good enough with the old tools that feeling dumb felt like a distant memory. But most of the time, once I figure out how to solve the old problems with new tools, I realize that there are new classes of problems I never even considered solving because the old tools couldn't handle them. It's not just about solving old problems faster, it's about rewiring my brain to find new ones to solve.

I just wish it didn't stink so much at first to feel dumb again.


A Necessary Addition to the Style Guide


#chemistry

Drawings of chemical structures are one of the most important ways (especially organic) chemists convey information. And while everyone has their style, there are certain conventions that are universal.

And while I think that nitpicking Chemdraws in presentations is not always constructive, when it comes to publishing, I think there's real value in a style guide. At the very least, drawn structures that are consistent within a paper make taking in the information frictionless. Consistency within a single figure, I think, is even more important. Take this reaction that I ran a bunch of times in graduate school:

A double methylation reaction, with some problems

This drawing isn't technically wrong. But I've rotated the whole thing around. I've drawn methyl groups two different ways. I also abbreviated the carboxylate in the product but not in the starting material. It would be a lot more clear what's going on if I had "locked in" as much of the structure as possible so the most obvious things to stick out were the changes, and if the abbreviations were consistent.

A double methylation reaction, but better

Better, right?

This argument has little to do with the aesthetics of the above structures. For instance, I really don't like "hanging stick" methyls, like on the ether in the first drawing. But many use them, and they're not really wrong. The actual problem here is clarity and consistency. There wasn't a reagent that looks like "–– I" in the starting materials, so why put it in the product? And there's one "H3C" in the product that works all right, so why not be consistent with the methyls? The point of these drawings is the easy conveyance of information. I think the top drawing does not do a very good job at that.

This brings me to a paper that dropped in OPRD today. It's a good paper, with great chemistry, and I'm not even going to link to it because this criticism isn't for the authors – it's for everyone that draws structures and puts them on (virtual) paper. But look at this little slice of the header graphic:

inconsistent chemical drawings

If I were to re-draw this, I'd rotate the starting material 60° counterclockwise to show that the starting carboxylic acid hasn't changed (besides being made into a sodium salt). That change makes it immediately clear what's happening here – an SNAr reaction followed by some sort of electrophilic sulfonation. Also, they've abbreviated a methyl two different ways – as "Me" and as a hanging stick. Not so bad, really, but it feels like two different teams worked on the two halves of this molecule.

Again, this isn't an argument about what looks nice. We can argue about what looks nice, and nice looking Chemdraws look different to different people. This is about clarity and ease of information transfer. Keep as much about your starting materials and products unchanged as is practical, and let the changes stand out.


App Store Conspiracies


#code

I'm really not a fan of Apple's App Store policies. This is not a revolutionary statement. Ever since the App Store launched, the 30 percent don't-call-it-a-tax has felt pretty crappy to anyone developing on the platform. But all of that cash can't seem to fund decent App Store review infrastructure, with good apps rejected for no reason and absolutely terrible apps making it through all the time. How can they be raking in billions of dollars, mostly from gambling apps for children, and not be able to pay people to properly filter out the garbage?

But, then (Christopher Atlan, via Michael Tsai):

My sources tell me Google has successfully inserted provocateur agents inside Apples App Review team. They are exceeding their goal to discourage indie devs, making these remarkable apps for the Apple platforms.

So this kind of conspiracy theory stuff is probably wrong. But it's believable, kind of. At least I want it to feel believable. I'd certainly prefer the App Store to be a good place to distribute software. But it's not. As to why, it's easier to imagine that a few bad folks are ruining it than to acknowledge that the whole place might be rotten. You can get rid of a few bad people.

But, really. Is App Review being torpedoed by secret Android fans on the inside? I think the simpler explanation is probably right. The incentives for App Review being terrible just outweigh the benefits of fixing it. Thirty percent of gambling apps for children, scams, and all of that adds up to a lot.


Another Blog CMS


#code, #meta

I haven't written anything here in a bit, and it's again because I've been fighting the tools.

I've gone through and converted this blog to use the Grav CMS, with a heavily-modified version of the Hypertext theme. I like it a lot more than previous attempts:

  • My first, from-scratch, git-based CMS. It did exactly what I told it to do, which was great but also sometimes a problem. Not having a web backend stunk.
  • The second, Automad-based system. Automad is great. But I couldn't bend the theming system to my will, and persistent permissions issues on the server prevented me from really using it. Likely PEBKAC but I couldn't get it to work.

This system feels better. Grav uses the Twig templating engine, and as I've made edits to the theme, I find myself filling the site with little curly braces. It's odd to have Twig get processed down to php, which then generates the actual html. You'd think it wouldn't be super efficient, but Grav also implements decent caching, so it works out all right.

I really hope this one works out. My other options are less desirable. There's always old reliable.


Twitter, Mastodon, and This Blog


#media, #meta

It's been a weird few months on the internet.

Elon bought Twitter, and started doing the sorts of things that a person like him would do. I initially stopped posting, and eventually stopped going there altogether. I don't miss it.

Part of why I don't miss it, though, is Mastodon. I wasn't on Twitter in the early days, but I'm told that Mastodon currently feels like that did. I really like it. Not having quote-tweets, or an algorithmic timeline that rewards outrage, makes it a much more pleasant place.

But jumping onto Mastodon has a cost – I basically don't write here anymore. I built this website so I'd have a place to put long-form content on the internet, and so I'd have a sort of home base. But my publishing workflow is pretty clunky. I write a text file with some metadata at the top, commit it to a git repository, then pull the changes on my server. Then I visit a special url, type in a code, and some php chews on all the text files and spits out a directory structure and html.

That works fine if I write stuff that my text parser can understand. But if it spits out garbage, or I need to fix typos, the whole thing falls over. It's not robust. I have resorted to writing drafts when ideas occur, then firing up a local copy on a LAMP stack on my home Mac to make sure there aren't any errors, and finally doing a duplicate "real" publishing workflow. It's not exactly friction-free.

So instead, I just fire up one of several Mastodon apps on my phone and post something there. Easy edits, no worries about text parsing, and a small audience sees it. A 500-character limit and pre-posting threading makes long-ish form stuff easy, too.

So what do I do here? I tried hosting my own Mastodon instance so I'd own and control my own stuff, and could gradually call that my micro-blog. But it's heavy enough software that I don't care to increase my server hosting costs 2-3x just for a micro-blog. Write Freely is another piece of software that can be self-hosted and implements ActivityPub, but I couldn't get it to work well on my server.

So maybe I install WordPress again. It's fine, I guess. But in earlier iterations of this site, it gave me nothing but headaches. There are other, more lightweight cms's out there. Might give one of them a shot.

In any case, this may be the last post using the current system. Building it was a fun project. But my day job isn't in software, and I don't have the skills or time to dedicate to making it actually good. It's probably time to rely on the professionals doing what they're good at.


New Dungeons and Dragons Rules


#media

So Wizards of the Coast released their first playtest material for the next version of Dungeons and Dragons. The first batch of stuff is on character backgrounds and races. I think the new rules are overall good. Not perfect, but pretty good. Based on the vitriol on the internet, I might be in the minority, but the angry folks are always loudest. A bunch of things, on first glance:

Making ability scores a) tied to background and b) completely flexible negates the mechanical requirement for a bunch of old-fashioned race-stat combinations. Not all elves are slender brainiacs, and not all orcs are big brutish tanks. This change felt pretty obvious after Tasha's came out last year, but it's cool to see it codified.

Half-races just being mechanically one or the other is kind of a bummer. But creating a system to pick and choose aspects from two races to combine (like how the half-elf and half-orc already worked, more-or-less) would be really awkward.

Nowhere in any race description does the document even suggest a predetermined alignment. The lone exception may be Tieflings, whose ancestry lies, "for better or worse," in nasty things – but the document explicitly says it has no effect on moral outlook.

The concept of completely customizable backgrounds (with some pre-generated ones if you don't want to build your own) is very good. I basically did that already by reflavoring the official ones to get close to what I wanted for a character.

I'm not sure about the Ardlings. They feel like de-buffed Aasimar, which puts them more in line with a celestial-flavored Tiefling. I guess that's the point. I don't care for the animal head thing.

A flat 50 gp budget for equipment feels fair, but for a brand-new player, a "this or that" choice was way easier. That said, your rogue can now buy studded leather, two daggers, and a bag of ball bearings on day one and be pretty much set for the campaign.

Combining Magic Initiate into three lists (Arcane, Divine, Primal) is very good. Especially for the Primal list – it feels like Rangers got some caster representation. But I imagine an Ancients Paladin or a Scout Rogue with appropriate skills, Thorn Whip, and Hunter's Mark would be a better Ranger than most Rangers.

Getting free Inspiration on a 20/d20 is fine, but the DMs I play with aren't stingy with it. One suggestion I saw was to give Inspiration on a 1/d20 instead to even out the bumps, which feels nice.

I have mixed feelings on the "only player characters crit, and then only on weapon dice" thing. Seeing a full-HP low-level character get massive-damage killed in one hit from a crit feels bad, but also makes for good stories. And you never feel more powerful than when you crit-Smite an undead creature as a Paladin. Bursty damage is hard to design a game around, but it's usually pretty fun in practice.

I do not like the "all nat 1's are fails" thing at all. If you finagle a +12 or whatever in a skill, you should basically never fail at that skill check.

...

I did not intend to write that much about Dungeons and Dragons. I guess I have a lot of ideas. But I would never want to actually be in charge of this stuff. There are a lot of angry people on the internet this week who are pretty upset about their play-pretend-with-math game. Some people stuck with 3.5e or 4e; I'm sure some will do the same with 5e.


An Ode to Delta IV Heavy


#space

The final Delta IV Heavy launch from Vandenberg just happened, and it kind of makes me sad to see it wind down – especially when comparing it to SLS. There are plenty of things to complain about relating to SLS, but two stand out when comparing to Delta IV Heavy: the engines and the boosters.

Delta IV's main RS-68 engines are derived from the Space Shuttle's main engines, but are intended for a single use, and are simpler and less expensive as a result. The Heavy variant has three nearly-identical cores in the first stage, with the outer two serving as boosters. The RS-68 sheds much of the complexity of the RS-25 by cooling the nozzle ablatively, which you can do when you only need to use it once.

Compare that to SLS. If it ever gets off the ground, it will result in four flight-proven, reusable RS-25 SSMEs being thrown into the ocean as garbage. An earlier iteration of the Shuttle-derived booster concept did use a few upgraded RS-68s, but they decided the engineering required to manage the interactions of the RS-68s' ablative nozzles with the solid boosters was too difficult to overcome.

And yeah, the boosters. SLS uses Shuttle-derived solid rocket boosters. I think this is really bad for everyone who doesn't work for Northrop Grumman. Humans should never fly on a rocket that uses solids as part of its primary propulsion. Obviously, there's Challenger. That failure was predicted, and ignored by managers suffering from go-fever. But in the leadup to SLS's predecessor program, the Air Force determined that in some phases of flight, a solid rocket failure would have a 100 percent chance of killing the crew, due to the escaping capsule's flight through a shower of burning fuel. Watching a capsule full of crew successfully escape a failing rocket only to lose its parachutes to burning debris would be heartbreaking.

Delta IV Heavy doesn't use solids for primary propulsion. A failing side- or core booster would be really bad, of course. But it would probably result in an explosion that isn't followed by a persistent fire. By the time an escaping crew came back down through the altitude where the failure happened, the propellant would already be done burning.

Okay, so how about the upper stages? I'd compare them, but SLS uses Delta IV's second stage in an almost unchanged state. Of course there are plans to upgrade SLS's upper stages in the future. But that requires it to fly more than once or twice. We'll see.

Crew-rating the Delta IV Heavy would take some effort. NASA did a study (pdf) early in the Constellation days and said it could be done. It would require some new software, avionics, and process changes in manufacturing. The biggest change would be to modify the second stage. Here's the thing – that modified second stage could be done, because that's what the SLS second stage is. Although I'm not sure how I feel about Boeing's ability to safely implement hardware and software changes without cutting corners.

In the end, I'd feel a lot better about a crew-rated Delta IV Heavy than SLS. Sure, it's expensive, something like $250 million a pop. But SLS is like eight times that. And you have the solid failure debris problem, and the bad feeling of throwing SSMEs into the ocean. And while I think the safest crew-rated rocket on the planet right now is Falcon 9, I'd much rather climb onto a Delta IV Heavy than SLS.


Really Nice Chemdraws


#chemistry

I really enjoy the figures that the Nicewicz lab produces for their publications. Scrolling through this page makes it clear that there's a distinct style there that I don't see in other research groups. And you don't see it in the MacMillan or Johnson group publications, so he didn't pick it up from a former PI.

If you scroll down far enough, you can see that most of it is there from the beginning. But the font took a while to show up (the 2015 JACS paper at 18 doesn't have it, but the Science paper at 19 does). There's a transition period for the font, then it's really consistent onward.

There are probably some subtle factors that I'm not seeing, but to my eye, a few things make the Nicewicz lab figures stand out:

  • the font face and relative thinness compared to the bonds
  • liberal use of stereochemical wedges and dashes (especially on the acridinium catalysts)
  • a consistent use of specific red and blue colors for emphasis

I think I have some ChemDraw settings that more-or-less replicate the Nicewicz style as much as I can manage. The version of the table of contents graphic for this paper on the Nicewicz website includes some color swatches that the one on the Synlett site doesn't, and that helped a lot to nail down the colors.

  • Futura Medium, size 10
  • 0.016 inch line width
  • 0.032 inch bold width
  • 0.2 inch fixed bond length
  • Red: RGB 186, 6, 6 (#BA0606)
  • Dark blue: RGB 55, 6, 123 (#37067B)
  • Light blue: RGB 118, 146, 183 (#7692B7)
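The hex codes are just the same RGB values formatted as fixed-width hexadecimal, so if you ever need to regenerate them (say, for a web version of a figure), printf does it in one line each. A quick check of the swatches above:

```shell
# Format RGB triples as hex color codes: two uppercase hex digits per channel.
printf '#%02X%02X%02X\n' 186 6 6      # red        → #BA0606
printf '#%02X%02X%02X\n' 55 6 123     # dark blue  → #37067B
printf '#%02X%02X%02X\n' 118 146 183  # light blue → #7692B7
```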

(my attempt at a Nicewicz-style drawing)

Although I still can't figure out how to get the acridinium catalyst to look right. Bolding the correct bonds and wrangling it into the right orientation with the Structure Perspective tool in ChemDraw gets it close, but the double bonds get a little squished on some of them. And I still think the font isn't quite right. Maybe there's a Futura Light that they use that I don't have installed.

In any case, I wish I could make figures that look as good as Nicewicz's without obviously copying the style. Mine aren't too bad, I guess. At least I'm not [putting] [every] [word] [in] [brackets].


Limitations


#chemistry

Really cool paper out in JACS yesterday detailing an electrophilic aromatic substitution reaction that works on really electron-deficient rings. With some cheap pool chemicals, a pretty simple benzenesulfonic acid catalyst, and the world's most magical solvent, you can stick a bromine on some pretty tough rings.

But as interesting as that is, my favorite part of the paper is the portions of the figures that list the method's limitations.

There's an obvious incentive in a methods paper like this to market the method in as positive a light as possible. A huge percentage of papers probably leave out results that didn't turn out well because they would make the method look weaker. But when you do see this sort of thing, it's in really high-quality papers – and it's pretty likely that you can trust the method to work on the first shot.


The Beginning of NSP 3


#code

I've begun the great migration of NMR Solvent Peaks to SwiftUI.

I have looked into porting the iOS / iPadOS version a few times, and always hit hard walls. Not being able to place certain views where I wanted was the primary problem, but there were others. But three things have changed:

  1. A redesign of the main interface renders many of the issues I had before moot
  2. I've worked around some of the SwiftUI problems to put things where I want them
  3. The rest of the problems aren't bad enough to warrant a workaround (read: a pile of hacks)

To my surprise, I could have done this all last year, since none of what I've written (so far) requires the iOS 16 improvements to SwiftUI. The main thing motivating me is the message from the WWDC '22 Platforms State of the Union: AppKit and UIKit will be around for a while, but they're old news.

I'm not sure it's the same sort of transition as Carbon to Cocoa back in the early Mac OS X days. Carbon absolutely went away for good in Snow Leopard, but SwiftUI "compiles down" to AppKit and UIKit components in the background. Those frameworks are only going away if SwiftUI someday runs natively without them, and I don't think that's happening any time soon.

I'm only partway through the porting process. But the underlying model code doesn't need to change at all (it's still Swift, after all). Porting over the multiplet drawing code wasn't as hard as I thought it would be. And the overall design for the main interface is starting to come together. In the next few weeks, I'll see if I can get it across the finish line. Hopefully it'll be ready when iOS 16 drops in September or October.


Re-Listening to Podcasts from the Dark Times


#media

A good chunk of my job involves standing at a fume hood, mostly by myself, and doing stuff with my hands. Early on in grad school I started listening to podcasts, and they're great for when you're doing extended solo work.

I kind of only listen to two types of podcasts. There's the "two or three friends just talk about stuff" type and the "one person tells you about historical stuff" type. Sometimes the first type has a vague outline but it's better as it becomes more freeform, because the show is really more about the people than the topics. The second type is effectively a history audiobook released a chapter at a time. I know I'm weird, because the more popular interview shows and the murder documentaries and public radio essays don't really do it for me.

There aren't really even that many examples of either type that I actually enjoy, so I end up listening back through favorites. Usually this entails downloading episode 1 and just going through it in chronological order. I'm currently going through a Roderick on the Line re-listen, which is a perfect example of the "just some friends talking about stuff" type.

Podcasts are especially nice when the world is terrible and you just want to hear cool people talking about cool stuff. The problem with the more free-form types, though, is when you get to certain points in time. Say, November of 2016, when a little over 50% of Americans just sort of had to sit and blink, or panic, or plan a move to Canada. If I'm doing a podcast re-listen, I sort of have to skip a few months.

The other, obviously, is around mid-March 2020. We all had to start taking the wrong kind of vacation. It's pretty wild to think how (most of us) reacted to a few cases here or there. Compare that to the current, "who cares lol" wave of mid-2022, where there are something like an order and a half of magnitude more cases than in early 2020, despite a decent vaccination campaign (at least in California), yet many people seem to think that everything's fine. But I digress.

It's been more than two years, so the panic at the start of the current unpleasantness is kind of history now. But it's hard to listen to people say "well, we'll get through this eventually" because we really haven't yet.

So, maybe two periods of podcast re-listening time to skip now.


Models and Malarky


#chemistry

Every chemical engineer loves a good computational model. You have a reaction stream, you add some stuff, and would you look at that, there will be a minor exotherm. You'd better cool it down a bit before adding the stuff and be sure to add it slowly. That'll add some capex and time costs, but that's what the budget is for.

Models are great. But they usually need verification in the real world.

I've been called a radical empiricist for this view (not that kind, or that kind). But I don't think it's unreasonable. Calculated models are not the answer. They're the question.

This came up at work lately with a predicted exotherm on mixing two solutions. The modeling software spat out a nearly 20 °C exotherm, which was ridiculous given the components. The modelers asked for help, I measured it in the lab, and it was actually a 2 °C endotherm in a flask. The hunt for software bugs begins.

But a lot of times, when it's more subtle, that kind of result is overlooked or ignored. At best, you end up over-provisioning your equipment and wasting your budget. Or maybe your timeline gets pushed back because you have to source a beefier part. But at worst, well, kaboom.

It's the same problem as with organic reaction mechanisms. Nature is complex. We can't yet accurately simulate every bit of the universe down to the boson and quark or whatever. So we compromise and make models. Usually they're pretty good. Usually.

So verify your models. Maybe it's my bias as a process chemist, but at the end of the day, it's my job to make sure product gets into the drum. When the equipment is designed and built wrong because the model said it was fine, it makes that job a lot harder.


Version Control


#meta, #code

I think it's finally time for me to learn git.

I have been writing code that goes out into the world (starting with NMR Solvent Peaks) for about five years now, and I'm not proud to say that I haven't been using any form of version control. Nothing I do is complex enough to really need it, and it's just me.

But that's not sustainable, obviously. If I'm ever on a team of people working on a project or if I don't trust my edits (and boy howdy should I not trust my edits sometimes) I'm going to need to get good at it.

So the plan is to put the contents of this blog under version control. The code should definitely be under version control, but I'm also going to use it to send new blog posts to the server. Regenerating all of the html files will start by pushing down edits and new files, then parsing and writing html. I'm going to host the repository at Github for now, since it's there and I can write (or paste) right in their text editor.
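The push-to-deploy part is the standard git bare-repo-plus-hook pattern. The sketch below is purely illustrative (the paths, branch name, and file names are all made up, and a real setup would run the site generator after the checkout); it wires up a local "server" bare repo whose post-receive hook checks pushed files out into a site directory:

```shell
# Illustrative push-to-deploy demo: a bare "server" repo whose
# post-receive hook checks the pushed files out into a site directory.
rm -rf /tmp/blogdemo && mkdir -p /tmp/blogdemo/site
git init -q --bare /tmp/blogdemo/server.git

# The hook runs on the server side after every push.
# (A real deploy would invoke the html generator here, too.)
cat > /tmp/blogdemo/server.git/hooks/post-receive <<'EOF'
#!/bin/sh
GIT_WORK_TREE=/tmp/blogdemo/site git checkout -f main
EOF
chmod +x /tmp/blogdemo/server.git/hooks/post-receive

# "Local" side: clone, write a post, push.
git clone -q /tmp/blogdemo/server.git /tmp/blogdemo/local 2>/dev/null
cd /tmp/blogdemo/local
echo "hello, world" > post1.txt
git add post1.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "first post"
git push -q origin HEAD:main

# The hook has placed the file on the "server" side.
cat /tmp/blogdemo/site/post1.txt
```

The same hook works over SSH against a remote host; only the remote URL changes.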

And if it all blows up, well, hopefully I can roll back.

Edit, some time later:

This post, and this edit, were both pushed to the server by using git. Success!

Edit 2, in January of 2023:

This system turned out to really stink.


Deciding on a URL


#meta

It's very difficult to choose a website URL. But maybe the most important part is picking the TLD, because it says so much (intentionally or not):

  • .com: what you really wanted, but it's $6000 (renews at $12/year, what a deal)
  • .net: relatively cool genX person blog
  • .me: relatively cool millennial blog
  • .co: fishing for typos
  • .tv: YouTuber not from Tuvalu
  • .fm: podcast not from the Federated States of Micronesia
  • .biz: big corporation internal site or SEO spam
  • .io: either a node package or a crypto bro
  • .xyz: definitely a crypto bro



Why Sparteine?


#chemistry

I always scratch my head when papers like this one show up in the ASAPs. Sparteine is a weird chiral ligand. The (+) version is comparatively inexpensive ($18/gram right now at Oakwood if you buy 10 grams), but the (-) version is infamously expensive or impossible to source ($595/gram right now at Oakwood). The paper in question only mentions the (+) version, so if you want the other enantiomer, you'll need to shell out some cash or find another ligand system. I can't help but think there had to be a more flexible chiral amine scaffold to go after.


Serial, not Parallel Projects


#code, #meta

I'm at the part of this blog engine project where I think I can see all the parts that need to be built, but they haven't been built yet. This is a dangerous place to be.

I have the Xcode 14 beta downloaded and ready to go. iOS 16 is calling. New SwiftUI functionality is making it harder not to just sit down and rewrite NMR Solvent Peaks from scratch. Maybe it'll be better this time? Or maybe it'll take longer to hit the brick wall of "you can't do that" functionality.

No, I will be strong. There are still plenty of lines of echo $blah to write.


Painful Retractions


#chemistry

Two pretty rough retractions today in JACS (1, 2). Same group, same authors list, same story - raw NMRs and HPLC traces were edited or fabricated.

It's difficult to figure out how to feel about these things. At first, you feel kind of bad for the professor here. Their grad student faked data, and now their name has an asterisk beside it.

But it's also not very hard to imagine a lab culture where the only acceptable results are good results, and not getting the right product or a good ee or a high yield is unacceptable. Professors have been known to demand results in failing projects before letting students graduate.

Not every case of data fabrication is the result of a desk-pounding PI. This one probably isn't, given the level of investigation that seems to have happened before retraction.

But I bet a lot of them are.


Starting off Fresh


#meta

Here we go, post number 1.

I've tried the blog thing before; it never stuck. For a while in college I was using the old iWeb as a sort of journal, but that thing is lost to time. Later on, I tried running a WordPress blog, which actually went all right, but wasn't very exciting - save for one time when I criticized some guy's op-ed in Science, he got mad, and emailed me and my PI about it. I tried to reassure him that basically nobody read the thing, but, people-who-google-themselves will be how they are.

Chances are, this won't stick either. But this time, I'm writing it from scratch, so at least it'll be a fun programming project. Going to try the static site generator approach. Fill a folder full of text files with entries, run a command, and have it spit out a fully-formed html-only site. No comments or anything like that, which is a feature, not a bug.

I'm not even sure what to call the site. There are some options for inspiration, like qntm's Things of Interest, or Michael Tsai's blog without a name, or Daring Fireball. I'm not going to title it "untitled blog" or whatever. Probably going to end up being a cool word or two that doesn't already get a lot of search results.


First Post Please Ignore


#meta

If you're reading this, you've hit the end. Or the beginning. Yep.