In February, Lee and Shanmugam launched a tool that did just that. Interview Coder’s website featured a banner that read F*CK LEETCODE. Lee posted a video of himself on YouTube using it to cheat his way through an internship interview with Amazon. (He actually got the internship, but turned it down.) A month later, Lee was called into Columbia’s academic-integrity office. The school put him on disciplinary probation after a committee found him guilty of “advertising a link to a cheating tool” and “providing students with the knowledge to access this tool and use it how they see fit,” according to the committee’s report.
Lee thought it absurd that Columbia, which had a partnership with ChatGPT’s parent company, OpenAI, would punish him for innovating with AI. Although Columbia’s policy on AI is similar to that of many other universities — students are prohibited from using it unless their professor explicitly permits them to do so, either on a class-by-class or case-by-case basis — Lee said he doesn’t know a single student at the school who isn’t using AI to cheat. To be clear, Lee doesn’t think this is a bad thing. “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating,” he said.
I’ve told this story before but, hell, I’ll tell it again.
A friend of mine applied for a job back in the early oughts and they gave him one of those $5 Lego kits to assemble. Just a simple little structure and a minifig.
“You want me to build this?” he asked. “Yes,” the interviewer said.
They included the instructions and everything. He just laid out the instructions, built the structure, and presented the final product to the guy.
The interviewer nodded and started disassembling the Lego set and putting it back in the box.
“You’d be surprised by how many people this weeds out”, he told my bud.Report
Sounds like a Black Mirror episode.Report
We’d have to see the people get weeded out to *REALLY* feel that.
(That said, I’m kind of suspicious that this sort of thing was made illegal by Griggs v. Duke Power Co, but he was an architect-kinda guy applying for something architecture-adjacent, so I’m pretty sure they could defend this as being related to job duties.)Report
I don’t think that Griggs v. Duke Power Co says what you think it says.
Or, alternately, you think racial minorities are somehow more likely to be bad at Legos, which…seems unlikely for anyone to think.Report
“Testing or measuring procedures cannot be determinative in employment decisions unless they have some connection to the job.”Report
The last part of my interview for a job with the Colorado legislature’s joint budget committee was to take a bunch of information and, within a 45-minute time limit, write a recommendation on what the committee should do. One of the JBC rules was that staff presentations always end with a “Staff recommends…” statement. The most common reason new analysts left at the end of the first session — or sometimes mid-first session — was that they had to choose. Not lay out six options and stop. Lay them out and say, “Staff recommends option three because…”Report
Huh. You’d think that that’d be appealing.
“Staff recommends you do the thing that I talked about in my dorm room at 3 in the morning.”
“Staff recommends you outsource this to a contracting company owned by my brother-in-law.”Report
My second session was at the beginning of the Great Recession and a billion dollars of state revenue evaporated. Programs had to be cut. I was glad I wasn’t the staffer who said, “The governor’s budget office calculation is wrong. Staff recommends ten days of furlough for all non-essential personnel, not the requested three.” We took to letting all our calls go to voice messaging; having the people affected by the cuts we recommended scream at us in real time got old in a hurry.Report
I understand that is what people quote from it, and seem to think it says, but that sentence is completely out of context. The context is that it is the second of two sentences:
Even if there is no discriminatory intent, an employer may not use a job requirement that functionally excludes members of a certain race if it has no relation to measuring performance of job duties. Testing or measuring procedures cannot be determinative in employment decisions unless they have some connection to the job.
For more context, Griggs v. Duke Power Co is a holding about Title VII of the Civil Rights Act, which only cares about discrimination. If the first sentence is not true — if there is no discriminatory outcome — the second sentence is completely irrelevant. Employers can require prospective employees to do literally anything (not barred elsewhere by law) as long as the outcome doesn’t ‘functionally exclude members of a certain race’.
And notice how strong that qualifier is. It’s not even ‘disproportionately impacts members of a certain race’. It has to functionally exclude members of a certain race. Or, presumably, functionally exclude other categories protected by Title VII of the Civil Rights Act.
This ruling is pretty much entirely about intent. It says, ‘You cannot set up unrelated tests that unintentionally discriminate. The fact that it is unintentional does not matter.’ It is not intended to, surreally, bar all irrelevant employment tests from existing…the court cannot magically make a law about that! What would that even be illegal under?
(I do suspect they’d have a problem with mere disparate impact now, but Griggs v. Duke Power Co doesn’t say it.)Report
I suspect that part of the whole AI Apocalypse will result in employers having to put together Lego tests for prospective employees.
“We just want to see if you can read a document, summarize it, and write three lines of code based on its instructions.”
“But I have a degree in reading documents, summarization, and coding.”Report
“Also I have a mental disorder that causes me to be unable to read documents, summarize, and code when subject to a time limit or specific measurable outcomes”.Report
I guess Target was wrong to settle here, thenReport
Do you see the word ‘discriminatory’ in there? Or how this was done by the EEOC?
Target set up assessments that the government alleged ‘disproportionately screened out black, Asian, and female applicants’. And under Griggs v. Duke Power Co, the government doesn’t have to prove discriminatory _intent_ if those assessments were ‘not sufficiently job-related and consistent with business necessity’, which the government also alleged.
This is…literally, word for word, what I just explained the standards were under Griggs, and I even added a caveat that while Griggs said ‘functionally exclude’, I kinda doubt the courts would currently be happy with something that screened out 40% of Black applicants and only 10% of white ones, or whatever. You know, the exact situation that Target found themselves in.
In case people are not following:
You can give all the irrational tests you want, involving Legos or writing essays or whatever, _as long as_ they do not have discriminatory outcomes. If they do end up having a discriminatory outcome, it doesn’t matter what your intent was. (Or, more relevantly to the law, the government doesn’t have to _prove_ your intent.)Report
“And under Griggs v. Duke Power Co, the government doesn’t have to prove discriminatory _intent_ if those assessments were ‘not sufficiently job-related and consistent with business necessity’, which the government also alleged.”
So if I have a test that black people fail at a disproportionate rate, but I can show some evidence that the test is related to job function, then that’s okay? You would accept that such a test does not constitute a Title VII violation?Report
We can always look at the Lego test.
Surely that’s something that we can’t imagine having significantly different numbers for success/failure (say, within 5% of each other), right?Report
“Some” is not quite right, but otherwise that’s accurate.Report
(you’re aware that this was the argument used in Griggs v. Duke Power Co, right?)Report
Employment cases are the largest part of my day job and everyone in the business knows and understands Griggs. I get the impression you think you are saying something that (a) isn’t perfectly obvious and (b) is some kind of gotcha. Either you don’t understand what you’re saying, or reading, or can’t make yourself understood. We’ve seen this movie before.Report
“Target set up assessments that the government alleged ‘disproportionately screened out black, Asian, and female applicant’. And under Griggs v. Duke Power Co, the government doesn’t have to prove discriminatory _intent_ if those assessments were ‘not sufficiently job-related and consistent with business necessity’, which the government also alleged.”
So why should Target have settled, other than “the government said so”?
Oh, because it would’ve been too expensive to take the case all the way through multiple levels of court and besides Target might have lost? Welp. That sure does seem like a statement that IQ tests are de facto illegal, doesn’t it?
But, hey, between you and Clement, you’ve established that anyone who decides not to exercise their rights to the fullest extent is just a whiny wimp, and that anything done by a private entity cannot in any way be the result of government influence or threats.Report
Oh, because it would’ve been too expensive to take the case all the way through multiple levels of court and besides Target might have lost? Welp. That sure does seem like a statement that IQ tests are de facto illegal, doesn’t it?
Um, no. But you go on rooting for people to fight to the last drop of their blood, not yours.
you’ve established that anyone who decides not to exercise their rights to the fullest extent is just a whiny wimp, and that anything done by a private entity cannot in any way be the result of government influence or threats.
Learn to f*****g read. Then get back to us.Report
Andrew has an episode devoted to this article. Check it out.Report
Lee said he doesn’t know a single student at the school who isn’t using AI to cheat.
Birds of a feather……………..Report
The cheating tools were a lot more primitive when I was in school, but listening to tales of cheating always made it seem like you didn’t save that much time compared with just studying the material yourself.Report
As I’ve said elsewhere, the issue is that we got used to the idea that “competently-written five-paragraph essay with only a few typos or run-on sentences, plenty of filler and light on analysis” means “B-minus”. And now there’s a computer that can produce that competently-written essay, grabbing bits and pieces from Reddit related to the subject so that it’s not entirely filler, so what do we do about that B-minus? Obviously we could just raise our standards, but what does that mean for the people who came by their B-minus honestly? (It would have some serious effects on academic eligibility for athletics!)
Or maybe we say “screw it, use AI, what we want to see is you finding actual examples to support the argument instead of just giving me AI-filler”. Which wouldn’t actually be a bad thing, I think, because that’s what is supposed to be happening here. It’s like giving someone a box of Lego bricks and saying “build a structure”, versus giving someone an unfinished log and carpentry tools and saying “build a structure”. The latter does demonstrate a wider range of skills, but a lot of those skills are unrelated to the actual building of a structure; most of it’s about prep work.
(On the gripping hand we’d have to address the fact that a lot of teachers probably aren’t good for much more than “competently-written five-paragraph essay with only a few typos or run-on sentences, plenty of filler and light on analysis”…)Report
Back in 2023, I mentioned how my friends’ 14-year-old (at the time) was delighted to show us how he could make the AI spit out 800 words on The Underground Railroad.
Now, to be fair, I rarely have to do stuff that involves The Underground Railroad. Like, the kiddo’s AI-written essay glanced at from across the room was the last time I had a real interaction with the concept. When it comes to being able to sit down and write multi-paragraph essays, well, I suppose that I do that sort of thing all the time and benefitted from decades of doing it and then doing it again and then doing it again.
I like to think that I’m good enough at it to break the syntactical rules in ways that engage the reader rather than alienate the reader (allowing the semantic content to do that).
And I don’t know whether I actually benefitted from getting good (or good enough) at it or whether it’s like how I learned to drive a stick or write cursive.Report
ah-ha, and I see I used nearly the exact same post here that I posted back then 😐Report
I suppose the solution is to go back to blue book exams.Report
The Chronicle of Higher Education has a cri de coeur: Is AI Enhancing Education or Replacing It?
It describes writing a paper thusly: “The assignment itself is a MacGuffin, with the shelf life of sour cream and an economic value that rounds to zero dollars.”
That’s actually one of the reasons I posted a bunch of my college papers here. I worked hard on those papers back in the 90s! If I could wring out a handful more readers by making blog posts of them, dang it, that’s a way to turn the sour cream into honey… and they found honey in the pyramids that was still edible.
The part of the essay that everybody is quoting is this part:
“You’re not always going to have a machine god in your pocket” seems a silly thing to say.Report