Tech Tuesday: No Class Today Edition

Oscar Gordon

A Navy Turbine Tech who learned to spin wrenches on old cars, Oscar has since been trained as an Engineer and Software Developer, and now writes tools for other engineers. When not in his shop or at work, he can be found spending time with his family, gardening, hiking, kayaking, gaming, or whatever strikes his fancy and fits in the budget.

25 Responses

  1. Marchmaine says:

    Bird-plane and cuttle-sub: we’re leaving our brutalist phase and entering our gothic tech phase.

    • Oscar Gordon in reply to Marchmaine says:

      I’d say we are going back to our gothic tech phase. Just watch some of those early flight attempt movies…

      What’s interesting to me is that way back when, we looked to nature to see how it solved problems like flight, realized we didn’t have the tech to copy nature, and figured out what we could do with the tech we had. Now our tech has advanced to the point that we can start looking back to nature.

  2. Jaybird says:

    From TT15: “The problem with Moore’s Law in 2022 is that the size of a transistor is now so small that there just isn’t much more we can do to make them smaller.”

    I want to say that I remember reading something exactly like this in 2007 talking about how we’re going to start stalling in 2012 or so.

    I do appreciate that this time it’s different.

    Maybe we could finally get software devs to clean up their code…

    • Oscar Gordon in reply to Jaybird says:

      As someone currently taking a software engineering course, first we have to get CS programs to teach what it means to code clean.

      • JS in reply to Oscar Gordon says:

        Unfortunately, that generally requires experience to understand.

        Generally experience fixing someone else’s mess.

        To get there, you first have to learn to code, and at that stage most students won’t focus on (or really internalize) clean code. It’s hard enough to get them to write comments, and you can actually trivially grade that. They’re focused instead on trying to understand the underlying principles.

        Of course, my definition of “clean code” might not be identical to yours. I’ve come to believe, over the years, that code should not be judged purely in terms of processor optimization, but in terms of maintainability and future extensibility.

        I’ll happily take a hit on an algorithm and spend a few extra processor cycles (nothing I make even comes CLOSE to the limits of modern hardware, so we have processor and memory to burn) if I can make that algorithm more readable and adjustable in the future.

        I do this because I’ve had to fix someone’s incredibly tight, well-written, pointer-arithmetic-heavy, highly optimized nested array setup. It worked well and was blindingly quick, right up until you needed to extend it, at which point it all broke down and two of us had to spend a week sorting out exactly how the thing worked. There were, of course, only a handful of incredibly useless comments, and the less said about the variable names the better.

        We replaced it with something that ran 10% slower, but where the internal array elements could be easily modified in the future.

        A real rock star type wrote it originally; it really was the clever sort of thing you’d find in something pushing the edge of hardware (and maybe, it being 20 years old, that was the case then). It was replaced by two decent coders with an eye towards letting anyone with a clue use it and amend it later.

        • Oscar Gordon in reply to JS says:

          The Clean Code I know is what Robert Martin lays out in his book, “Clean Code”. One of the things I take to heart is his admonishment to stop trying to do the compiler’s job. Write code for humans to read, and let the compiler, if it’s worth half a damn, do all the machine level optimization.

          My current migraine-inducing code (I’ve mentioned it before) is a mess of 2,000-line functions/methods that do all kinds of different things (violating the key rule, “Do One Thing”), with variables named as if character limits were still a thing to worry about. And of course, comments that make me wonder if the author was getting docked for each character of comment.
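
          To make “Do One Thing” concrete, here’s a made-up C++ sketch (emphatically not the code in question): the work split into small, named steps, with the compiler left to inline the pieces.

              #include <string>
              #include <vector>

              // Hypothetical records; the point is the shape, not the domain.
              struct Record {
                  std::string name;
                  double value;
              };

              // Each helper does one thing and is named for it.
              static bool isValid(const Record& r) {
                  return !r.name.empty() && r.value >= 0.0;
              }

              static std::vector<Record> keepValid(const std::vector<Record>& records) {
                  std::vector<Record> out;
                  for (const auto& r : records)
                      if (isValid(r)) out.push_back(r);
                  return out;
              }

              static double total(const std::vector<Record>& records) {
                  double sum = 0.0;
                  for (const auto& r : records) sum += r.value;
                  return sum;
              }

              // Reads like the sentence that describes it; a compiler worth
              // half a damn inlines the helpers, so we aren't doing its job.
              double validTotal(const std::vector<Record>& records) {
                  return total(keepValid(records));
              }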

          • JS in reply to Oscar Gordon says:

            “One of the things I take to heart is his admonishment to stop trying to do the compiler’s job. Write code for humans to read, and let the compiler, if it’s worth half a damn, do all the machine level optimization.”

            Heck yeah. I am 100% on board with that.

            Of course, I deal with legacy code. It’s a rare day I get the time and go-ahead to refactor something, not when there are always new features and abilities to add and actual bugs to fix.

            I’ve got a convoluted section of code I’ve got my eye on right now. It’d bluntly be more readable, more extensible, and easier to maintain if we replaced what we have with a simple giant switch statement on our major “cases” (sketched below) and just defined what was, and what wasn’t, enabled for each case in that sub-section:

            Case A: Option A yes, Option B no, Option C yes, C1 yes, C2 yes
            Case B: Option A no, Option B yes, Option C no, C1-C2 no…
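
            A minimal C++ sketch of that idea, with the case and option names invented:

                struct Options { bool a, b, c, c1, c2; };  // invented flags

                enum class Case { A, B /* ...one entry per real case */ };

                // One switch, one place to look: each case states exactly
                // what it enables, with no layered "generic" rules.
                Options optionsFor(Case c) {
                    switch (c) {
                        case Case::A: return {true,  false, true,  true,  true};
                        case Case::B: return {false, true,  false, false, false};
                    }
                    return {false, false, false, false, false};  // unreachable if exhaustive
                }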

            Instead we have these decision setups that made sense when we had 10 cases that all fell into one of three rules. But over 15 years? 100 cases, which we shove into 6 separate “generic” rules, which are full of exemptions on a case-by-case basis….

            Adding new cases ALWAYS breaks something in that section, and half the time it takes the debugger to figure out where options you defined as off get turned back on.

            The only reason I haven’t fixed it? I haven’t been authorized the two weeks I’d need, minimum (two days to write it, two WEEKS to test it exhaustively), since we’re always time-crunched on new features that our users pay for.

            It won’t get rewritten until it breaks so badly it’d take LONGER than two weeks to sort out.

          • Michael Cain in reply to Oscar Gordon says:

            “One of the things I take to heart is his admonishment to stop trying to do the compiler’s job.”

            Do they still have to harp on that? I don’t keep a copy of The Elements of Programming Style or the original Software Tools on my shelf any more, but even back in the ’70s best practice was (1) make it work first, then make it faster, and (2) if it doesn’t run fast enough, think about algorithms and/or data structures, since the optimizer is only going to get you so far.

            For a bit of time in 1978 I had the world’s fastest code for solving a particular narrow class of problem. Know how I got it? I took the box of cards from the previous world record holder and removed all of the stupid things they had done thinking it would speed things up. 10% fewer lines of code when I got done with that, and 20% faster.
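
            A made-up C++ illustration of point (2): one data-structure change buys more than any amount of hand-tuning the inner loop could.

                #include <string>
                #include <unordered_set>
                #include <vector>

                // O(n*m): no amount of hand-tuning the inner loop fixes the shape.
                int countBlockedSlow(const std::vector<std::string>& names,
                                     const std::vector<std::string>& blocklist) {
                    int count = 0;
                    for (const auto& name : names)
                        for (const auto& b : blocklist)
                            if (name == b) { ++count; break; }
                    return count;
                }

                // Roughly O(n + m): one data-structure change does what the
                // optimizer never could.
                int countBlockedFast(const std::vector<std::string>& names,
                                     const std::vector<std::string>& blocklist) {
                    const std::unordered_set<std::string> blocked(blocklist.begin(),
                                                                  blocklist.end());
                    int count = 0;
                    for (const auto& name : names)
                        if (blocked.count(name)) ++count;
                    return count;
                }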

            • JS in reply to Michael Cain says:

              Algorithmic efficiency is taught a lot, and thus students tend to focus on it.

              When what you really need to take away is how to evaluate canned solutions for a “good enough” choice (i.e., which search or sort algorithm is useful here, weighing execution time, ease of integrating into the current code base, and room for further extension) and how to glance at your own code and see, “Oh yeah, I’m looping that inner bit 5,000 times when 2/3rds of it only needs to be done once; let me move that for some easy gains.”
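
              A hypothetical C++ before-and-after of that “move it out of the loop” gain:

                  #include <cmath>
                  #include <vector>

                  // Before: the expensive normalization is recomputed every pass.
                  double scoreSlow(const std::vector<double>& xs) {
                      double total = 0.0;
                      for (double x : xs) {
                          double norm = 0.0;                  // loop-invariant work...
                          for (double y : xs) norm += y * y;  // ...redone n times over
                          total += x / std::sqrt(norm);
                      }
                      return total;
                  }

                  // After: compute the invariant once; same answer, easy gain.
                  double scoreFast(const std::vector<double>& xs) {
                      double norm = 0.0;
                      for (double y : xs) norm += y * y;      // hoisted out of the loop
                      const double scale = 1.0 / std::sqrt(norm);
                      double total = 0.0;
                      for (double x : xs) total += x * scale;
                      return total;
                  }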

              Real world stuff is a little more complex. Right now I’ve got (AGAIN ON BACKBURNER DARNIT) an issue where years of laziness (“Just call the event for that control, that’ll sort out items A-Z correctly”) has led to occasionally deeply nested event calls that are totally unnecessary.
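
              The usual cure, sketched in C++ with invented handler names: pull the shared work into a plain function, so one handler never fires another.

                  #include <iostream>

                  // The one job both paths actually need:
                  void sortItems() { std::cout << "sorting items A-Z\n"; }

                  // Before (the lazy version): onFilterChanged() would call
                  // onListReloaded() directly, dragging along every other
                  // side effect of that control's event.
                  // After: each handler calls the plain function and nothing
                  // it doesn't need.
                  void onListReloaded()  { sortItems(); /* plus reload-specific work */ }
                  void onFilterChanged() { sortItems(); /* plus filter-specific work */ }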

              Again, a lot of it comes down to experience. Degrees tend to teach you foundational computer science stuff, or really narrowly tailored coding stuff. Both dump you into software engineering with different biases that take experience and mentorship to grind off.

              (And then there are the “self-taught” coders; I’ve seen absolute brilliance marred by blind spots the size of a T-rex. Also, never let an engineer do anything with a database. They WILL turn it into a spreadsheet. A horribly, horribly inefficient spreadsheet. With a primary key that’s just a row number….)

              • Oscar Gordon in reply to JS says:

                My current rat’s nest was done by a self-taught engineer who started in C, then took his bad habits to C++ and Java. You can read both his C++ and his Java and tell he learned all his habits in C. I get into discussions with my leadership about what we are doing with that tool (because it’s a very effing useful tool and 3 of our biggest customers love it, but we cannot keep supporting it as-is), and they constantly get hung up on trying to understand the code.

                I have to pull them back and remind them that there is no understanding the code; there is only understanding what the code was trying to do, and replicating that (we know how the algorithms work, we have that documentation; it’s just his implementation of those algorithms that is a nightmare). They keep thinking there is something useful in the code itself (sunk cost fallacy?). There isn’t anything useful in there except stark lessons in what not to do.

              • JS in reply to Oscar Gordon says:

                One of the projects I support, which I have lobbied EXTENSIVELY to nuke and rebuild from the ground up as “XXXX 2”, was sloppily converted from C to C++ by someone who almost understood object-oriented programming.

                That “almost” resulted in one of the most brittle, badly written monstrosities I have ever encountered. Any critical bug I fix in it results in a dozen more bugs being fixed in JUST that subsystem, as it really only works along the pathways its primary user used.

                What kills me (KILLS ME) is that the one bit that’s not object-oriented is the internal data, which is kept in multiple arrays that reference and depend on each other in weird ways (clearly cut and pasted from the initial, pre-conversion code).

                That internal data? It is the cleanest, clearest example of “this should be an object,” because ALL this software does is load data from offline sources and manipulate and display it in various ways. And it’s not.

                It’s the old, memory-constrained nest of pointers to pointers to pointers in arrays where, God help me, I literally can’t remove or add items to some of the arrays without breaking EVERYTHING.
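
                An invented C++ miniature of the difference: parallel arrays coupled by index versus a record type that owns its fields.

                    #include <string>
                    #include <vector>

                    // Before, sketched: parallel arrays coupled by index.
                    //   names[i], values[i], and flags[i] must stay aligned
                    //   by hand, so adding or removing an element means
                    //   fixing up every array (and every index) at once.

                    // After: one record type owns its fields.
                    struct DataPoint {
                        std::string name;
                        double value;
                        int flags;
                    };

                    int main() {
                        std::vector<DataPoint> points;
                        points.push_back({"alpha", 1.5, 0});
                        points.push_back({"beta", 2.5, 1});
                        points.erase(points.begin());  // nothing else to fix up
                    }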

                Worse yet? At least half the program is functions and abilities to handle data types and manipulations that were common 20+ years ago but are no longer used.

                A modern version of this software would be simpler, more elegant, and have at most a third the functionality.

                And bluntly, I could slap it together in C# in maybe two months, needing only a poll of its few users to find out what features they actually use.

                Add two weeks if I slap it together in C++ using the current GUI library (which is fantastic, but offers no WYSIWYG editor; one would vastly accelerate things).

              • Oscar Gordon in reply to JS says:

                Ain’t Technical Debt wonderful.

              • As we used to say about some, “They can write bad Fortran in any language.”

                As a grad student I inherited 10,000 lines of Fortran developed on an IBM machine prior to the adoption of Fortran 77, and was paid to port it to a CDC machine. It was absolutely horrible Fortran.

              • Michael Cain in reply to JS says:

                Bless the people who do database design well, so that I don’t have to do it badly. Which I do.

        • veronica d in reply to JS says:

          The application I work on is quite interesting. I’ve mentioned before that it is about 600k lines of Common Lisp and about 400k lines of C++. It is hyper-optimized, with a massive amount of bit twiddling. We put nothing in a full integer that could instead be packed into a bit map. We avoid pointers and mallocs as much as we can. The redeeming grace is that Lisp lets us build macros for all of this, so in a sense we’re extending our “compiler” to let us write hyper-optimized code.
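
          Our tooling for this is Lisp macros, but as a rough C++ sketch of the same bit-packing idea (the field names and sizes here are invented):

              #include <cstdint>

              // Invented layout: three small fields packed into one 32-bit
              // word instead of three separate integers (the macros generate
              // this kind of shifting and masking for us).
              struct Packed {
                  static constexpr uint32_t kCabinBits = 3;    // up to 8 cabins
                  static constexpr uint32_t kCarrierBits = 10; // up to 1024 carriers
                  static constexpr uint32_t kFareBits = 19;    // 3 + 10 + 19 = 32

                  static uint32_t pack(uint32_t cabin, uint32_t carrier, uint32_t fare) {
                      return cabin | (carrier << kCabinBits)
                                   | (fare << (kCabinBits + kCarrierBits));
                  }
                  static uint32_t cabin(uint32_t w) {
                      return w & ((1u << kCabinBits) - 1);
                  }
                  static uint32_t carrier(uint32_t w) {
                      return (w >> kCabinBits) & ((1u << kCarrierBits) - 1);
                  }
                  static uint32_t fare(uint32_t w) {
                      return w >> (kCabinBits + kCarrierBits);
                  }
              };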

          This is all necessary — at least it was when the code was first written. The problem space is massive. The room for high level optimization is quite limited, as airlines did not design their pricing rules for ease of processing. We use as much data sharing and memoization as we practically can, along with a fair bit of tree pruning (with the ensuing loss of search space), but in the end our application lives or dies by its ability to process an exponentially growing mass of data as fast as it possibly can.

          In practice, our design resembles the entity/component scheme used in video game engines: https://en.wikipedia.org/wiki/Entity_component_system

          This is decidedly _not_ object-oriented; instead it focuses on data locality and cache performance, even at the cost of “nice clean code” schemes from the OO/FP world. For video games, this makes a visible difference: the more polygons you can pump out per second, the better the game feel you will get, at least in general. It is similar for our application.

          “10% slower is fine” is definitely not true for vidya, nor would it work for our application. For us, a 10% speedup means more and better solutions provided to our customers, and in reality we’re looking at more like a 50% speedup across the board. After all, if you are processing 100k entities, getting a high rate of cache hits versus cache misses is an enormous speedup. It’s very non-trivial.
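
          A toy C++ contrast of the data layouts involved, with invented names: iterating one densely packed array of the hot field is far kinder to the cache than striding across big heterogeneous records.

              #include <vector>

              // Array-of-structs: summing prices drags the cold bytes along.
              struct Entity {
                  float price;
                  char rules[240];  // cold data, rarely touched in the hot loop
              };

              float totalAoS(const std::vector<Entity>& entities) {
                  float total = 0.0f;
                  for (const auto& e : entities) total += e.price;  // strided reads
                  return total;
              }

              // Struct-of-arrays (the ECS-ish layout): hot fields packed together.
              struct Entities {
                  std::vector<float> prices;             // dense, prefetch-friendly
                  std::vector<std::vector<char>> rules;  // cold data lives elsewhere
              };

              float totalSoA(const Entities& entities) {
                  float total = 0.0f;
                  for (float p : entities.prices) total += p;  // sequential reads
                  return total;
              }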

          • I have always found your descriptions of the optimization problems fascinating, and except for (a) a spouse with progressive dementia, (b) granddaughters, and (c) depending on the ability to work from where I am and never have to fly, I’d probably be applying (whether you have current openings or not).

      • Michael Cain in reply to Oscar Gordon says:

        Are they teaching heavy parallelization these days? Clearly, that’s where performance increases are going to have to come from.

        • Oscar Gordon in reply to Michael Cain says:

          Not early on, they aren’t. I’ve got one more dev class before this short program is done, so maybe we will touch on it.

          But I agree, parallel processing is critical. Sadly, I’m not sure how early the concept of multi-threading is getting touched on, outside of managing GUI/System interactions. Moving into parallel processing, data analysis, and basic AI (Neural Nets, etc.) should probably start happening sooner.
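
          For flavor, here is a minimal C++ toy of the sort of exercise that could introduce it (entirely an invented example): splitting a sum across hardware threads.

              #include <algorithm>
              #include <cstddef>
              #include <numeric>
              #include <thread>
              #include <vector>

              // Toy exercise: sum a big vector on every available core.
              long long parallelSum(const std::vector<int>& data) {
                  const unsigned n = std::max(1u, std::thread::hardware_concurrency());
                  const std::size_t chunk = (data.size() + n - 1) / n;
                  std::vector<long long> partial(n, 0);
                  std::vector<std::thread> workers;

                  for (unsigned i = 0; i < n; ++i) {
                      workers.emplace_back([&, i] {
                          const std::size_t begin = std::min(data.size(), i * chunk);
                          const std::size_t end = std::min(data.size(), begin + chunk);
                          // Each thread writes only its own slot: no locks needed.
                          partial[i] = std::accumulate(data.begin() + begin,
                                                       data.begin() + end, 0LL);
                      });
                  }
                  for (auto& w : workers) w.join();
                  return std::accumulate(partial.begin(), partial.end(), 0LL);
              }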

    • Michael Cain in reply to Jaybird says:

      Yeah, FinFETs and other non-planar transistor designs let them keep going past 2012. More interesting is watching manufacturers drop away. Only Intel, Samsung, and TSMC continue to push for smaller sizes. GlobalFoundries, one of the DOD’s “trusted” suppliers, was blunt about it, saying that they couldn’t afford the R&D cost for fabs below 10 nm.

      It’s more fun to watch how parallel processing is evolving for consumer devices. Mixes of both slow low-power cores and fast high-power cores. In addition to GPUs, there are starting to be specialized cores that run certain AI models efficiently. Apple’s shiny new M1 Ultra (technically two chips plus a pile of very high speed smart interconnect) has 20 CPU cores, 64 GPU cores, 32 neural net cores, dedicated video and image processing, plus up to 128 GB of unified memory allocated to processors as necessary. 114 billion transistors.

    • Mike Schilling in reply to Jaybird says:

      The worst, biggest, sloppiest code is always the UI. We need to get users to appreciate the clean, simple logic of the command line.