Maybe Barbie was a Scientist

Jonathan McLeod

Jonathan McLeod is a writer living in Ottawa, Ontario. (That means Canada.) He spends too much time following local politics and writing about zoning issues. Follow him on Twitter.

99 Responses

  1. Patrick says:

    I understand statistics just fine. But I have to admit, if I was doing statistic-heavy research, I’d hire a stats nerd to do that work for me… and my undergraduate degree was in mathematics.

    To the extent that you’re taking advantage of expertise, this is probably a really good idea. To the extent that you’re thinking you don’t even need to understand the mathematical concepts behind whatever you’re doing, this is a really bad idea.Report

    • Jonathan McLeod in reply to Patrick says:

      You were the person I thought would most appreciate this article. It’s nice you’re also the first commenter.

      I do wonder if he wrote it the way he did just to try to draw some people’s attention to the more nuanced position.Report

      • Patrick in reply to Jonathan McLeod says:

        Well, not to put too blunt of a point on it, but academics who win prestigious awards have a tendency towards stark dichotomies. They can be bombasts, or paradigm warriors, or whatever… but generally they find a place where you can reducto yourself into horrible positions, pick a spot, and shout loudly.

        Then camps form, and everybody participates in the reducto musical chairs conversation, and the original shouter enjoys the ensuing faculty lunch.

        I do think that these sorts of conversations are important to have, and it’s not bad to have them frequently nor is it bad to have people shouting about the drawbacks of anybody’s position.

        Me, I stand on this ground: if you need to know A to do B, then know A. If you don’t need to know A to do B, know enough to know when the B changes to B’ and you need to know the A. Then hire somebody to do A, or learn it yourself.

        The tricky part there is that it’s easy for B to change to B’ without you paying attention. Collaboration with people you don’t particularly like helps on that score. It’s the one sort of standing advantage that peer-review has that makes it the least-worst option in publishing 🙂Report

    • Chris in reply to Patrick says:

      In psychology, this is a real problem because when you get beyond t-tests and ANOVA (hell, beyond t-tests), most researchers are either uncomfortable with the statistical tools because they don’t understand the math, or they use them improperly (see, e.g., multiple regression) because they don’t understand them. The result is very real and problematic limits on research design because the designs have to be tailored to a small family of statistical tests.Report

  2. Kimmi says:

    Oi. I’ve heard (and rather like) the idea that math needs a radical reformation.
    And physics isn’t taught the way people actually do it in the field…

    But, man, you could have stated that better!Report

    • Jonathan McLeod in reply to Kimmi says:

      I agree. I hate that math is taught in such a way that what so many people learn from math class is that they don’t like math. Things really need to change.

      Ironically/funnily/sadly enough, Ontario recently changed the way it teaches math. I’m not too familiar with the change, but apparently a lot of parents were upset because they could no longer help their kids with their homework. Schools even set up tutoring sessions for parents. Still, parents were REALLY UPSET.

      Of course, most of the people I knew who complained about the changes weren’t very good at it, anyway. So I had no idea why they wanted their kids to have the same crappy experience.Report

      • Kimmi in reply to Jonathan McLeod says:

        Mathematics is taught, and has often enough at the heart of it, the assumption that “stuff” is continuous (take calculus for instance). Physics says that ain’t so.

        And we wonder why quantum mechanics has such absolutely horrid math to learn? It’s not physics’ fault…Report

        • Mike Schilling in reply to Kimmi says:

          Mathematics is taught, and has often enough at the heart of it, the assumption that “stuff” is continuous (take calculus for instance). Physics says that ain’t so.

          And if you need to analyze the path of a projectile using quantum mechanics, drop that assumption. The other 99.9999% of the time, when you want to treat it as a solid object and figure out where it’s going, use calculus.Report

          • Kimmi in reply to Mike Schilling says:

            99.9% of the time, you’re into fluid flow, for whatever that’s worth. Not that I’ve studied it greatly, but I don’t think you generally turn to calculus for that.Report

            • Fnord in reply to Kimmi says:

              People treat fluid flow as a continuum all the time. Not a rigid continuum, obviously. But a continuous substance. And when they don’t, it’s usually (though admittedly not always) a discrete elements model that’s used because it’s computationally tractable.Report

      • Glyph in reply to Jonathan McLeod says:

        I sincerely hope schools will do better by my kid than they did by me. I read somewhere that it’s been found that stress – caused by struggling with the mathematical concepts – interferes heavily with the exact same regions of the brain that are required to process the mathematics.

        So kids that are struggling – even a bit – and get stressed about it, can get caught in sort of a vicious feedback loop in which they can’t understand, because they can’t understand. The brain regions needed for comprehension and computation essentially start to panic and shut down at the very sight of anything that looks like equations.

        At that point, not even rote repetition or patient, alternate explanation can make any headway (which otherwise would seem like reasonable palliatives) – the brain won’t (can’t) listen; it cannot do the work, because the processing power is not available to it as long as it is stressed (and that stress becomes emotionally associated with mathematics, so…)

        And mathematics being cumulative, with each new concept building on the last, it is thus very easy to fall far, far behind very, very quickly.

        This was my experience for many years, right down to the exact feeling of “panic/brain shutdown” (and no, willpower alone can’t correct for this, any more than it can shut off a panic attack); I am still extremely weak in mathematics, and many concepts which I do *now* understand, came to me many, many years after the fact (often in an offhand, “Eureka!” kind of way, when I happened to be considering something else – “OOOOHHHHH, so THAT’S how that works!”)Report

    • LeeEsq in reply to Kimmi says:

      When my mom was in high school, she had to memorize the entire periodic table for tests in chemistry. Decades later, my brother and I attended the same high school. They stopped making students memorize the entire periodic table on the grounds that if real chemists just look up the information when they need to, then why should we inflict the pain of memorization on high school students who probably won’t become chemists.

      This wasn’t necessarily a cop out to make things easier. If you give access to things like the periodic table you can actually make tests harder because students don’t have to worry about memorizing it.Report

      • Kimmi in reply to LeeEsq says:

        If one can’t reel off the noble gases, or the halogens, or the alkalis (okay, maybe not all of them), then one is probably missing something out of Chemistry.

        If nothing else, the ability to read labels quickly at the market.Report

      • Jim Heffman in reply to LeeEsq says:

        The thing is, though, that “memorize the Periodic Table” is something that even dumb students can do.

        If you take that away from teaching, then suddenly you have to teach dumb students to do smart-student things.Report

      • George Turner in reply to LeeEsq says:

        I once memorized the periodic table and could draw it out on a sheet of notebook paper. Then I found that the skill’s usefulness ranked somewhere below juggling. I probably have to pull up a periodic chart slightly less often than I otherwise would, but to this day no one has ever asked me where vanadium or polonium is.Report

      • Pierre Corneille in reply to LeeEsq says:

        This wasn’t necessarily a cop out to make things easier. If you give access to things like the periodic table you can actually make tests harder because students don’t have to worry about memorizing it.

        That’s true. When I was an undergrad, I used to get really upset when I studied for a test, and then on the day of the test, the class “voted” to make the test a take home. They just didn’t seem to realize that the standards just got harder.Report

        • Patrick in reply to Pierre Corneille says:

          Here’s what you do.

          “Okay, class, I realize that some of you might not be prepared for this exam, so everybody who feels like they’d rather do this as a take home exam, raise your hand. Okay, form a line right here, everybody else stay in your seat. Here’s a copy of the exam, excellent, good luck, I expect you to do all the work yourselves, but you’re welcome to use any outside resources other than each other or just posting the question to Bugtraq, good luck, see you all next week.

          Okay, the rest of you… you ready? You all can take this as a free iteration of your average test score and go home, or take the in-class exam now.”

          The people who are left will be those who really needed, or wanted, to perform really well on the exam.Report

            • Pierre Corneille in reply to Patrick says:

              God(dess) bless you! To be honest, the fact that take-homes were harder wasn’t my biggest beef (even though I said otherwise in my comment.)

            What used to really frustrate me as an undergrad was that I worked (not full time, just 20 to 30 hours per week) and I had to be very strict about budgeting my time. And when the professor gave us the “gift” of a take-home exam, that meant I had lost about one or two hours I hadn’t expected to lose. Of course, life’s not fair, etc., but it still irked (and irks) me.Report

  3. Morat20 says:

    I yanked in a stats guy on my committee (he actually taught a lot of comp-sci classes, mostly on the hows and whys of doing computational math and the algorithms therein, but his doctorate was math and his love was statistics) for my Master’s thesis exactly because I suck at math. (Relatively speaking).

    I had a giant mass of data that I did very simple statistical analysis on, and I picked this guy entirely because I was shaky on the conclusions I was drawing and the validity of the analysis — again, really simple undergraduate level stuff — I was doing.

    A chunk of that was it having been almost a decade since I’d done any math more complicated than a checkbook or a few proofs of algorithmic complexity (basically butt-covering: “It will not eat the processor and end the world, worst-case”).

    But a lot of it was — I was a low B student in any hard-math course in college. Sure, I had something like half the required mathematics courses for a math BS by the time I graduated, but I couldn’t say I understood it like I did CS stuff.

    My favorite go-to example was Laplace Transforms. I learned that in… Cal II? Cal III? Never understood it. Could fake it on a test, promptly forgot it. Never had a clue how to apply it.

    Then I took a digital circuits class and you used Laplace Transforms to turn a circuit with a capacitor or an inductor into something simple where you could work out voltages, then transformed it back to put the time-element back in. THEN it made sense. Intuitive “I can use this in the real world” sense.
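
    A minimal sketch of that trick, in Python with SymPy (the RC step-response example and its symbols are illustrative assumptions, not from the comment): turn the circuit’s differential equation into algebra in s, solve for the voltage, then transform back to the time domain.

      import sympy as sp

      t, s = sp.symbols('t s', positive=True)
      R, C, Vin = sp.symbols('R C V_in', positive=True)

      # Time domain: R*C*dv/dt + v = Vin (step input), with v(0) = 0.
      # The Laplace transform turns that ODE into algebra: (R*C*s + 1)*V(s) = Vin/s.
      V_s = Vin / (s * (R*C*s + 1))

      # Transform back to recover the charging curve Vin*(1 - exp(-t/(R*C))),
      # times a Heaviside step.
      v_t = sp.inverse_laplace_transform(V_s, s, t)
      print(sp.simplify(v_t))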

    I just don’t handle totally abstract well. I’ve got to have a solid problem I understand and then use the math to solve it before I really understand the math.Report

    • Patrick in reply to Morat20 says:

      Laplace Transforms were taught in Differential Equations, at my alma mater, which was basically Calculus IV.

      That’s when I really stopped caring about very high level applied math (integration by parts planted the seed, DiffEQ brought that plant to full growth).

      “Wait, let me get this straight. Any really interesting problem like this would be insoluble using the methods that you’re teaching us now, and the really interesting problems are uncountably infinitely more numerous than the soluble ones? So what you’re saying is that this is basically ‘stupid mathematician tricks’ on steroids and by the time I’m 40 we’ll be able to program computers to do this work for us? This is the log table equivalent for my generation? Thanks, I’ll stick to the theory stuff.”Report

      • Morat20 in reply to Patrick says:

        Ah, Diffy-Q. I got like…nothing out of that class. Well, a barely passing grade. I got more out of linear algebra than that.

        Ironically, I do know the methods used to solve differential equations using a computer (the ones done under the hood, not the ones that solve it symbolically like you would on paper) — those are quite interesting.
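
        A minimal sketch of that kind of under-the-hood solver: forward Euler applied to dv/dt = -k*v. The equation, constants, and step count are just illustrative choices, not anything from the comment.

          # Forward Euler: repeatedly follow the local slope for one small step.
          def euler(f, y0, t0, t1, steps):
              """Advance y' = f(t, y) from t0 to t1 with fixed-size steps."""
              y, t = y0, t0
              h = (t1 - t0) / steps
              for _ in range(steps):
                  y += h * f(t, y)
                  t += h
              return y

          # Exponential decay with k = 0.5: the exact answer at t = 4 is exp(-2) ≈ 0.1353.
          print(euler(lambda t, v: -0.5 * v, 1.0, 0.0, 4.0, 10_000))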

        I can still barely tell what a differential equation is used for. I vaguely recall pipes being filled with water. An engineer friend of mine does assure me that he uses them.

        Now, calculus I remember, because I played with the basics (I and II level stuff — single and double integration and derivatives) in physics, and you sorta grasp the concepts behind simple derivations and integrations just by doing some simple mechanics problems.

        You know, moving cars or cannonball shots and such are concrete enough that I could grasp and recall the abstract math and how it worked.Report

      • Mad Rocket Scientist in reply to Patrick says:

        Laplace transforms were taught in Calc 3 for me, but became interesting in Control Theory (as did Fourier transforms)Report

        • Morat20 in reply to Mad Rocket Scientist says:

          I’m not knocking advanced math. I’m just saying that I took a LOT of advanced math, it was taught very abstractly (very rarely were we given concrete, real-world examples — I could, at least as long as I retained it, solve a differential equation. I wouldn’t realize when I’d need to after that. Even if I did need to) and the only stuff I truly “got” was the stuff I actually used.

          I got Cal I because I took Physics Mechanics. I got most of Cal II due to electromagnetism. Cal III? Not so much. Laplace transforms because of a circuits class, but little else out of Diffy-Q. Some linear algebra, but not much.

          Boolean logic and discrete math? Lots of that. I use it.

          I just don’t know if the abstract way we teach higher math really works for everyone. It didn’t for me, but then I managed to get through it anyways. Was that good or bad?Report

          • George Turner in reply to Morat20 says:

            It makes me wonder if there’s a disconnect between mathematical education (college level courses, not primary-school arithmetic) and that in other fields. Could you imagine a language class where they just wrote the conjugations on the board as formulas and then moved on to other more advanced topics with the assumption that the formulas somehow conveyed enough understanding for the students to start chatting away in the real world?

            Perhaps the difference is due to the attitude that few people need to know advanced math, and few are thought to be capable of handling it, so why go to the trouble of finding ways to teach it to everyone? On the other hand, few people need to know Latin, and although we can teach Latin to everyone, what horrible thing did they do to deserve it?Report

            • Morat20 in reply to George Turner says:

              Well, I was learning Calculus and Physics simultaneously — which is why I ended up switching from Physics to Computer Science. Physics was using math that had just been introduced that week (if I was lucky — sometimes it lagged behind) and I couldn’t keep up with the math and the physics.

              Should I have learned the math a semester ahead, and learned to use it later? A semester isn’t that bad a jump — you don’t forget that much. But trying to use math concepts I’d just been handed Monday (and hadn’t even touched the homework on) on Wednesday killed me.Report

              • Kimmi in reply to Morat20 says:

                We had worse stuff in college. Where the math was out of sync with the physics, to the point where a good third of the juniors were repeating the class, and the sophomores were advised to drop it if they weren’t in diffyeq yet.Report

              • Morat20 in reply to Kimmi says:

                Personally, I think it was caused by trying to fit all the classes into a “4 year degree”. You really couldn’t — not for physics. (Probably not for a lot of sciences).

                You needed an extra year, unless you came in with college credit in a lot of math and science classes already.

                Which I did — I retook Physics: mechanics because my AP scores weren’t high enough (I got credit, but didn’t feel I’d really grasped the material well enough — had a 4). I started on Cal II, but second semester when I was doing electromagnetism… I don’t remember what I took first semester.

                Basically, that “four year plan” for classes was and is bunk. Classes don’t make, classes conflict, and absolute best case for a lot of degrees you’re having to learn concepts and apply them out of order in different classes. Better to plan for five years and take the math you need the semester before.Report

      • Brandon Berg in reply to Patrick says:

        I never really got integral calculus until I took electromagnetism in college. I could solve integrals, but I didn’t really understand what they meant geometrically (other than simple stuff like area under a curve), and consequently couldn’t set up integrals from word problems.

        Nevertheless I got a 5 on the Calculus BC test even though my class only covered the AB material. My multivariate calculus professor always described stuff as being “so easy a trained goose could do it.” Which was technically correct, but also indicative of everything that’s wrong with the way calculus is taught. If you can’t set up an integral from a word problem, you can’t do integral calculus.Report

        • Morat20 in reply to Brandon Berg says:

          I got that in mechanics, because my teacher (and this was in high school) did the derivations from distance down to acceleration.

          So you could see how the derivatives and integrations worked, in terms of acceleration, velocity, and distance.

          So the “slope of the tangent line” bit (the derivative) of the velocity function? Was your acceleration at any point. The “area under the curve” of velocity? The distance travelled (the integral of the velocity function was the distance function).Report
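
          A tiny SymPy version of that derivation (the constant-acceleration setup is just an illustrative choice, not from the comment): differentiate distance to get velocity and acceleration, then integrate back up.

            import sympy as sp

            t, a = sp.symbols('t a', positive=True)

            distance = sp.Rational(1, 2) * a * t**2   # x(t) = (1/2)*a*t^2
            velocity = sp.diff(distance, t)           # slope of x(t): v(t) = a*t
            acceleration = sp.diff(velocity, t)       # slope of v(t): a

            print(velocity, acceleration)             # a*t   a
            print(sp.integrate(velocity, t))          # area under v(t) recovers a*t**2/2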

        • George Turner in reply to Brandon Berg says:

          Well, I got an A in calc I but really didn’t fundamentally understand it until I coded software routines to find integrals and derivatives. As I recall one problem was finding the mass of water in an odd shaped submarine ballast tank tilted at an arbitrary angle. The easiest way to do that was just to determine if a point at x,y, and z was inside or outside the tank, and if inside whether it was above or below the water level (as determined by a sensor at some particular point in the tank).

          Perhaps teach calculus after a few programming courses and then teach the clever concepts as refinements to the student’s area, volume, or slope algorithms. The course might have to be stretched out, but I think students would both retain more and gain a better insight into what they’re doing, and why. That approach could feed right into numerical analysis, optimization, and some other concepts that would be just as useful as calculus to many people in modern jobs, if not more so.Report
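
          A rough sketch of that point-testing idea, done as Monte Carlo sampling. The “tank” shape, water level, and bounding box below are invented purely for illustration; a real tank would need its own inside/outside test and a tilted water plane.

            import random

            def inside_tank(x, y, z):
                return x*x + y*y + z*z <= 1.0      # placeholder "odd-shaped" tank: a unit sphere

            def below_water(x, y, z, level=0.2):
                return z <= level                  # pretend the sensor puts the surface at z = level

            def water_volume(samples=200_000):
                box_volume = 2.0 ** 3              # bounding box [-1, 1]^3
                hits = 0
                for _ in range(samples):
                    x, y, z = (random.uniform(-1, 1) for _ in range(3))
                    if inside_tank(x, y, z) and below_water(x, y, z):
                        hits += 1
                return box_volume * hits / samples

            print(water_volume())                  # multiply by water density to get the mass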

          • Rod Engelsman in reply to George Turner says:

            Yeah, I can just see that sort of problem in my head. They teach you how to do it analytically in theory but that assumes that you can define functions that describe the surfaces and then that that function is tractable to integration. And then tilted at an arbitrary angle? Screw that noise; numerical integration here we come!Report

  4. KatherineMW says:

    mathematically merely “semiliterate,”

    Shouldn’t the term be “seminumerate”?

    Anyway, not all branches of the sciences require complex math.Report

  5. Mad Rocket Scientist says:

    The problem with learning mathematics is (in my experience) that the curriculum is written by mathematicians, for mathematicians, with the subtle attitude of “If you can’t learn it this way, like the rest of us brilliant mathematicians did, why are you even bothering with this class”.Report

    • Glyph in reply to Mad Rocket Scientist says:

      I agree – see my comment above.

      But we didn’t know then what we know now – so back then, it must have seemed like you either “get” mathematics or you don’t, innately.

      While, as with all things, native intelligence undoubtedly plays a role, the stress/shutdown paradigm seems to have explanatory power (to me anyway) that indicates there are lots and lots of people who – even if they will never be prodigies – could be much, much stronger mathematically if only we knew how to handle the education piece better.

      How many people had one stressful year of math in second grade, learning times tables or whatever, and got screwed for life due to the residual emotional association of that?Report

      • Jonathan McLeod in reply to Glyph says:

        I’m also skeptical that a majority of math teachers really “get” math. Sure, they can do it, but that’s not necessarily the important part.Report

        • Patrick in reply to Jonathan McLeod says:

          The people who really get math usually act weird. I stopped at undergrad because I was terrified I’d wind up being the sort of dude who walked around campus in slippers and a tweed coat muttering to myself about n-connected topological spaces or something.Report

        • Mad Rocket Scientist in reply to Jonathan McLeod says:

          There is a difference between Math teachers & mathematicians, in that one barely understands math, & the other has no clue how to teach.Report

          • Kimmi in reply to Mad Rocket Scientist says:

            mm… yes. “What is this thing with brackets that you didn’t bother to mention in class, sir? I don’t understand what it is, let alone how to solve the homework problems”
            (teacher had been using sigma notation…and only sigma notation)Report

          • MikeSchilling in reply to Mad Rocket Scientist says:

            Actually, some mathematicians are excellent teachers. I had John L. Kelley [1] for the upper division measure theory class, and he was awesome.

            1. A very famous topologist.Report

            • Mad Rocket Scientist in reply to MikeSchilling says:

              My 2 favorite math teachers:

              College Algebra – guy taught at a local tech school (I was building transfer credits) & held a Master’s in Economics, but was the first math teacher to open my eyes to how to solve Algebra problems (I learn math graphically, equations make no sense to me until I plot them, which he did, for everything – & which no teacher before had ever done).

              Come to think of it, the next two teachers at that tech college were also math geeks, but not mathematicians, and were awesome teachers!

              Linear Algebra – Guy was a visiting professor from Switzerland & despite a very thick French accent, was very patient and took the time to make sure everyone understood what he was telling us. I was nervous when I started that class, but by the end I really knew the material (& knew that I never wanted to work anything beyond a 4X4 matrix by hand).Report

      • Rod Engelsman in reply to Glyph says:

        I knew someone who simply couldn’t “get” algebra because she couldn’t get past the idea of adding (or multiplying or whatever) together a number and a letter. The thing of it was, she could work through fairly complicated logic problems in puzzle books, so she wasn’t dumb, she just couldn’t get past that mental block. At least until I eventually got her to see how it was just another kind of logic problem.

        Personally, I always had an easy time with math. When I was a kid I had a hard time understanding how someone couldn’t get math; it was just so natural for me. Until I ran into something toward the end of Series and DiffEq* in college, namely the series expansion business. It still boggles my mind that something like that could even work and who in the hell would even think of trying anyway? (Okay, apparently some dude named Taylor.)Report

        • MikeSchilling in reply to Rod Engelsman says:

          A Taylor series is just a sequence of successively better approximations. If you look at, say, sin(x) near 0,

          x is a pretty bad approximation (unless you’re *very* close to 0)
          x – x^3/3! is better
          x – x^3/3! + x^5/5! is even better
          x – x^3/3! + x^5/5! – x^7/7! is better still

          Nothing hard there; we’re just applying more and more corrections. And it happens that the limit of taking more and more corrections converges precisely to sin(x).Report
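
          A quick numerical check of those successive corrections, evaluated at an arbitrary point (x = 1.0; stopping after four terms is arbitrary too):

            import math

            x = 1.0
            approximations = [
                x,
                x - x**3/math.factorial(3),
                x - x**3/math.factorial(3) + x**5/math.factorial(5),
                x - x**3/math.factorial(3) + x**5/math.factorial(5) - x**7/math.factorial(7),
            ]
            for approx in approximations:
                print(approx, "error:", abs(approx - math.sin(x)))   # each error is smaller than the last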

          • Rod Engelsman in reply to MikeSchilling says:

            Oh, I get how infinite series work and how they converge to the value. What I have (or had, I haven’t seriously looked at the things in over thirty years) a hard time with is how you would come up with that particular pattern to converge to that particular function. I mean… what the hell is the connection between [sin(x)] and [((-1)^n)(x^(2n+1))/(2n+1)!]? Is it related to the way you could converge on pi?

            Anyway, I like the way it was presented as “If you run into an integral that can’t be determined analytically then you can construct the Taylor series expansion to calculate the result to any desired precision.” Uh…. thanks. I guess.Report

            • MikeSchilling in reply to Rod Engelsman says:

              You know what the first derivative is, right? It’s a slope. You can approximate a function f(x) at some point a using the line through (a, f(a)) with the right slope. That’s

              T1(x) = f(a) + f'(a)(x-a)

              In case it’s not clear, a, f(a) and f'(a) are values, and x is the only variable in that equation.

              Notice that those are the first two terms of the Taylor series. It’s an OK approximation when x is very close to a. If we do some algebra that I’m not going to try to recall at 11:00 at night, we get a better approximation that includes the second derivative:

              T2(x) = f(a) + f'(a)(x-a) + f''(a)(x-a)^2/2!

              Instead of deriving this, I’ll hand-wave a bit, and say that since the second derivative gives us the rate of change in the first derivative, it also gives us an approximation to |f(x) – T1(x)|, since the reason that T1 is only an approximation to f is that T1’s first derivative is a constant. (If f'(x) were a constant, f would be a straight line, and T1 would be equal to f.)

              Now we have a better approximation, but it’s still imperfect unless f is a parabola, so we correct it further using the change in the second derivative, AKA the third derivative. We can keep improving our approximation the same way, and the algebra will keep giving us the next term in the Taylor series, which will be f^(N)(a)(x-a)^N/N!.Report
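
              For reference, the compact form of the polynomial being built up term by term (standard notation, not in the comment itself):

                T_N(x) = \sum_{n=0}^{N} \frac{f^{(n)}(a)}{n!}\,(x-a)^n, \qquad f^{(0)} = f,\ \ f^{(n)} = \text{the } n\text{th derivative of } f.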

              • Trumwill in reply to MikeSchilling says:

                You know what the first derivative is, right? It’s a slope.

                But is it a slippery one? Some of us worry about slippery ones.Report

              • Mad Rocket Scientist in reply to MikeSchilling says:

                f^(N)(a)(x-a)^N/N! – Kinda looks like a mathematically correct curse word when written without LATEX or something similar.

                BTW, good effort on the explanation, it’s very clear. I have to admit I think my understanding of series just improved a bit, so thank you.Report

              • Rod Engelsman in reply to MikeSchilling says:

                [I walk out of my house on some sort of mission. It’s an important mission, but quickly forgotten as a car pulls up directly in front of me on the street. A small car, a compact hatchback from the late ’70s, being driven by a dog. It’s a nice dog, one of those gangly sort of hounds that just lives to fetch balls in the park. He (definitely a “he”, don’t ask me how I know) stomps on the brakes, puts it in park and moves over into the passenger seat, entreating me with those soft eyes, “Hey, dude! You drive!” My wife emerges from the house and from somewhere we determine the address of the owner. It’s unfamiliar to me but my wife says she knows where it is and in the manner that can happen only in dreams and Warner Bros. cartoons magically produces a map. We’re now on a new quest; return the dog and car to their owner. We pile in and…] beep beep!… beep! beep!… beep! beep!

                My alarm is sounding. I’m in a truckstop on CA-86 somewhere between Calexico and Pasadena. Time to roll. Get dressed enough to pop out and pee on a tire, then compulsively check my email on my phone (yeah, I’m one of those). “New Comment ….” Open it. Mike S. has deigned to try to explain Taylor fucking expansions to me. Oh, G-d! Coffee… that sweet elixir of life itself, I need coffee before I can even begin to make sense of this shit.

                Nope. Still doesn’t help much. It’s like I understand all the symbols and I had enough calc to know what the hell a differential is, but comprehension still eludes me. I ponder it on and off all day while I’m driving toward Salt Lake City, finally deciding that I just really need a picture of what’s going on. I’m a visual thinker and if I can’t visualize it I can’t really understand it.

                Wolfram’s mathworld site, my goto for this kind of thing… no help there. No pictures. Dang. Wikipedia… bingo! Graphs! Oh… it would have helped to say that a=0 and the derivatives are evaluated at that point. Reading farther… Eureka! Homer doh headslap. You’re really just constructing a polynomial curve that approximates the function. The more terms you add, the closer you get to matching the curve. Still a bit vague on how and why the derivatives give you the coefficients, but I’m closer at least.

                And what does the dream have to do with this? Aside from just being terminally weird, maybe nothing. Or maybe it’s a kind of projection. Maybe it’s like I’m the dog and you’re the driver. Anyway, thanks, Mike.Report

              • Rod Engelsman in reply to Rod Engelsman says:

                Darn. Forgot to close the html.Report

              • Kimmi in reply to Rod Engelsman says:

                yeah, much of math is far more fun with pictures than stupid equations.Report

              • Rod Engelsman in reply to Rod Engelsman says:

                Still a bit vague on how and why the derivatives give you the coefficients, but I’m closer at least.

                Okay. I got it now. Because if the function actually is a polynomial then the coefficients (multiplied by the factorial of n) are the actual values of the derivatives at a=0. So the Taylor series of a polynomial function is the actual function itself. Which makes sense.

                Thirty years later and I finally understand this. In all fairness they barely touched on this topic in the course way back when. Dunno why.Report
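
                A quick SymPy check of that point, using an arbitrary example polynomial p(x) = 3 + 2x + 5x^2 (the nth derivative at 0 should equal n! times the nth coefficient):

                  import sympy as sp

                  x = sp.symbols('x')
                  p = 3 + 2*x + 5*x**2

                  for n in range(3):
                      # nth derivative at 0   vs   n! * (coefficient of x**n)
                      print(n, sp.diff(p, x, n).subs(x, 0), sp.factorial(n) * p.coeff(x, n))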

              • Kimmi in reply to Rod Engelsman says:

                this! this is what irritates me about math/physics!
                The most useful stuff, and they spend minutes on it!

                They spend hours teaching you “the slick” way of doing the integral exactly. But, dude, that’s not the useful part!
                😉Report

              • Rod Engelsman in reply to Kimmi says:

                Well, I’m willing to give my professors a pass on that score. I took that class back around 1980 or so. Climb into the way-back machine with Sherman and Mr. Peabody for a bit of perspective…

                I used a slide-rule–yes, a slipstick–back in high school until I was finally able to score a TI-35 scientific calculator my senior year (’78), maybe. It was new and nerd-hot and I was in high clover. Back then and even into college it was common for math textbooks to have tables of log and trig functions in the Appendix. I used that in college until I whined my folks into springing for an HP-41C programmable calculator at some point. Again, I was living the Sheldon dream; about 1K of memory plus the extra I installed with these little plug-in cards about the same size as an SD card. Had another one for additional math functions.

                Nobody had PC’s because they didn’t really exist yet. Some years later, like mid-’80s, my bro-in-law had a “portable” computer that was about the size of a suitcase. Bleeding edge stuff with about a five inch screen and it played a first-gen version of Flight Simulator in MS-DOS.

                They were well aware that only a very limited class of problems were analytically tractable but the numerical methods weren’t available except to some universities and large corporations on mainframes. Our school had one, but I was writing and running batch programs with Fortran and PL-1 on punch cards. Woe be unto you if you dropped your deck of cards. That’s what line numbers were for, so you could put the deck back together.

                Anyway, what you say makes a lot more sense in 2013 than it would have in 1980. I lived through the transition. Hell, I’ve likely got more computing power on my phone than the university had back then with their big, honkin’ machine.Report

              • MikeSchilling in reply to Rod Engelsman says:

                Okay. I got it now. Because if the function actually is a polynomial then the coefficients (multiplied by the factorial of n) are the actual values of the derivatives at a=0. So the Taylor series of a polynomial function is the actual function itself.

                And you can take that a bit further: if f isn’t a polynomial, the Taylor polynomial of order N is the Nth-degree polynomial that has the most derivatives in common with f, which means that

                Term 0 (f(a)): It has the right value
                Term 1 (f'(a)(x-a)): It has the right slope
                Term 2 (f''(a)(x-a)^2/2): It has the right amount of concave upness or concave downness
                etc.

                After term 2, it’s harder to describe what the nth derivative does geometrically, but it remains true that if f and g have N derivatives in common, they’re good fits for each other, and the bigger the N, the better the fit.Report

            • MikeSchilling in reply to Rod Engelsman says:

              “If you run into an integral that can’t be determined analytically then you can construct the Taylor series expansion to calculate the result to any desired precision.”

              In other words, if you can’t find an exact formula, here’s a way to generate better and better approximations.Report

        • Mad Rocket Scientist in reply to Rod Engelsman says:

          Oooooh, infinite series! I got the abstract concept, but always struggled with how to work them analytically (probably why I went toward numerics, just approximate to 8 decimal places and have a beer)Report

      • Rod Engelsman in reply to Glyph says:

        Arggh. The * was meant to point to a post-script where I noted that we called the course “Serious and Difficult Equations” at the time.Report

  6. Maribou says:

    You have reminded me of an undergraduate class I took which basically consisted of reading papers (and these were reputable papers, published in Nature, Science, and the like) and then critiquing them in small groups (and then turning in an analysis, and explaining our findings to the rest of the class). Usually they were easy to shred; once in a while they were brilliant and perfect. Had to keep us on our toes.

    Anyway, within about 4 classes, we had learned to ALWAYS ALWAYS ALWAYS start by redoing the math for ourselves, because it was so very often wrong or fudged. “THIS ISN’T STATISTICALLY SIGNIFICANT!” we would cry, and “CORRELATION DOES NOT EQUAL CAUSATION,” and “CAN’T YOU FUCKING ADD?” Young wolves, slavering at our elders’ heels.

    That was a fun class.Report

  7. Rod Engelsman says:

    There’s something to what Wilson said. Einstein conceived of relativity by visualizing problems in his head first and then working out the math later. My understanding is that he wasn’t really what we would call a top-notch mathematician compared to others in the field.

    I also get a monthly newsletter of articles called “Real-World Economics Review” that started out as a protest group of economics students bemoaning the over-mathematization of economics. It was originally called the Post-Autistic Economics Review.Report

    • MikeSchilling in reply to Rod Engelsman says:

      There’s nothing unusual about using intuition to solve a problem and verifying it rigorously later. A lot of learning math is training your intuition in different ways to make that possible.

      Take infinity, for instance. I think that most people, when they’re introduced to Cantor’s theory of transfinite types, which shows that there is a hierarchy of infinities [1], immediately dig their heels in. Their intuition says “How can there be bigger and littler infinities? Infinity is so big, you can’t count it! How can anything be bigger than that?” I felt the same way when I first read about it. It took a fair amount of study, rumination, and working through examples to grasp the ideas, and more to absorb them to where they become intuitive. But now it makes perfect sense, and the initial objections seem naive.

      1. How many infinities are there? Pick an infinity. Nope, more than that.Report

      • Patrick in reply to MikeSchilling says:

        Power Set the mofo, BOOMReport
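
        For anyone who hasn’t seen the trick being alluded to, the standard power-set argument (textbook material, not from the thread) in one line:

          \text{For any } f : S \to \mathcal{P}(S),\ \text{let } D = \{\, x \in S : x \notin f(x) \,\};\ \text{then } x \in D \iff x \notin f(x),\ \text{so } D \neq f(x) \text{ for every } x,\ \text{no } f \text{ is onto, and } |\mathcal{P}(S)| > |S|.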

      • Rod Engelsman in reply to MikeSchilling says:

        Yeah. I think what Wilson may have been trying to get at was that the person making the intuitive leaps doesn’t necessarily have to be the same person who rigorously verifies it mathematically later. For instance, a team where I think up shit and you do the math could be very productive (well… maybe not that particular combination, but you get the idea).

        And I dig the transfinite. Cool stuff.Report

        • MikeSchilling in reply to Rod Engelsman says:

          Sure. But either the ideas guy is a natural untutored genius, or he’s trained his intuition to think up the right kind of shit. I’m not sure I believe in the former.Report

          • Rod Engelsman in reply to MikeSchilling says:

            It depends a lot on which field of science he’s in, right? I mean, if you want to be a theoretical physicist playing with string theory in 11 dimensions… well, yeah, you can’t do that without being a first-rate mathematician. But in just about any other field the conceptual stuff probably isn’t much worse than calc or diffEq, perhaps linear programming. The life sciences are going to be heavy on the stats for the experimental work but I doubt that the conceptual stuff is inherently heavy in the math.

            Even if I was a scientist doing stats-heavy work and considered myself to have a fair grasp of that, I would still value having a real statistician check it over for me. Saves embarrassment.Report

            • MikeSchilling in reply to Rod Engelsman says:

              Here’s an example, a theory I’ve seen about why some level of homosexuality may actually be selected for. Obviously, being gay makes it less likely that you’re going to pass your genes on directly [1], but that’s only one path to evolutionary fitness; another is to protect the lives of people closely related to you. So, hypothesize that at some point in the distant past it was advantageous to have a certain number of extra adults to protect the children of the tribe without straining scarce resources by creating their own.

              I suspect that someone with a strong background in evolutionary biology would know right away whether this was plausible or not. If it was, at that point they could engage someone to see if the numbers made sense, and if not, they wouldn’t bother. (It seems like nonsense to me, because given the level of infant and child mortality there’d already be plenty of childless people, but what do I know?)

              1. As is often pointed out by people who, the rest of the time, disdain evolution. Odd, that.Report

              • Rod Engelsman in reply to MikeSchilling says:

                Yeah, that’s the sort of thing I was talking about.

                Tangentially, another theory I’ve heard that makes sense is that certain versions of a gene(s) on the X-chromosome increase a woman’s fertility but also have the side effect of creating a tendency for males with that allele to be gay. So fertile sisters, gay brothers. Virile males, not so fertile sisters. Sets up a tension that results in a fairly predictable fraction of gay males in a population.Report

              • Kimmi in reply to Rod Engelsman says:

                Rod,
                Research I’ve heard about is that male homosexuality is more about brainspace, and less about physical stuff. That male homosexuals have a “female brain” (Yes, there’s plenty of definition that I could spool off here).Report

              • Mike Schilling in reply to Kimmi says:

                Hmmm. Do gay men not like the Three Stooges?Report

              • Brandon Berg in reply to MikeSchilling says:

                It’s probably worth pointing out that a trait doesn’t have to be genetic to be biological. Gene expression is heavily influenced by environment, and while twin studies do suggest some genetic component, environment appears to do most of the heavy lifting.

                Nor does a trait have to be selected for to spread. For example, a deleterious recessive trait can be selected for due to close proximity to a beneficial dominant trait. It could also result from an interaction among multiple genes, any of which is beneficial in isolation. Or it could be beneficial when heterozygous and deleterious when homozygous, like sickle-cell anemia.

                1. As is often pointed out by people who, the rest of the time, disdain evolution. Odd, that.

                On that note, I wonder what the correlation is between people who are absolutely sure that homosexuality is genetic and those who downplay the stronger evidence for the genetic component of intelligence.Report

              • Kimmi in reply to Brandon Berg says:

                Brandon,
                I’m not convinced that we’re measuring intelligence correctly. I’m definitely convinced that anyone who wants to make racial arguments about intelligence based on the terribly biased sample we call “Americans” is full of fucking shit.Report