Question for Boneyard teachers, particularly high school teachers | Page 2 | The Boneyard


I am switching after teaching math for 17 years, but this is how I taught math: direct instruction with time for individual practice. I hated group work as a student, and as a math teacher I hate it even more. You either know the math or you don't. A group won't help you learn the material.

My school wanted some sort of group work built in (think-pair-share stuff), but I don't do it, and my scores on evaluations are fine.

Not sure what rating system you have, but we have "effective" and "highly effective" as the two ratings you need to be safe. The teachers that get highly effective really think they are the best, but the stories I've heard from their classrooms are amazingly terrible.
The rating system I've fallen backwards into at the middle school in this new district is.... Basic...

Show up, do your job, don't have kids / parents complain about you - all set

Never seen anything like it, but after the trials by fire I've been through, nice surprise.

Also, to tack on to your point: the teachers that strive to get "highly effective" are, to a degree, the absolute worst. A former colleague, a department head, taught her classes with stations and student-led everything. She worked incredibly hard to set it up that way, and to be honest it's awful and the kids all hate her. Gotta find balance.
 
We are in the “Golden Age” of AI now, and it may be ending soon. Virtually all website security programs will include AI blockers going forward. As a result, it is going to get much harder and more expensive to update and train AI systems. I also expect copyrights to explicitly deny authorization for use in AI models going forward. Not sure how it will affect teaching, but it will definitely impact AI.
The court system will not honor any of these copyright restrictions. AI exists on fair use, according to the courts.
 
As educators, we're going to have to tailor our teaching toward using it appropriately vs. just cutting it out.
This is what our AI Department was supposed to be about.

But again, the way it is actually used has blown my mind.

Let me give you one example: I taught a class on Literature and War. A student decided that for his final project he would write on the work of four poets who experienced war, focusing on their use of the collective "we" to refer to prisoners of war or survivors.

In the first two-thirds of a 15-page paper, he did OK work analyzing the poems and interpreting the significance of the pronouns in relation to traumatic experience. It was a solid B paper in any year prior to AI. Then, suddenly, at about page 10, the student wrote that the problem with generative AI in analyzing the preceding poems was that it wasn't very nuanced, that it missed a lot of the play in the poems themselves, and that it resorted to analogy too much.

Although I suspected much of the paper was AI, I am not taking on the role of plagiarism detector. But it occurred to me that in this kid's mind there was absolutely nothing wrong with taking an entire AI-generated essay and presenting it as his own. If he thought it was wrong, he wouldn't have admitted in the very same paper that this is what he had done. But curiously, he then started to critique the AI's interpretation of the poems. It's like some kind of critical impulse took over his brain, and the last five pages were just ripping the AI analysis apart. For those of us in English, this is a classic LitCrit move: "This critic says this, and this is why he is wrong!"

At that point I was scratching my head, trying to figure out what it was that I had in front of me, since part of me appreciated the last five pages. I was blown away. It was clear to me that students have jumped over an entire ethical realm that we believe exists naturally. It doesn't.
 
I saw a very interesting speaker a few years back speak about the future expectations of AI (Pascal Finette).

He said that we should not be focusing on teaching 5-6 year olds how to memorize their multiplication tables or the state capitals. There is no use for that type of thinking in the future. You can literally ask that question into the air and get an answer back in a split second. Instead, we should focus on how they use critical thinking and technology to solve problems, since it's only becoming more available, more detailed, more accurate, and faster.

My daughter used ChatGPT to create a LinkedIn profile. It took her probably 1/10th the time and it's very polished and well written. If I were an employer and came upon it, I would be impressed. Is that "cheating" because she didn't think of her own sentence to describe something? Or is it fully utilizing the tools available to her so she can provide a better, faster, more accurate result?

Here's a similar presentation by him:
 

The problem is that people who never learn how to write are going to have their ability to express themselves diminished. They may be able to use the LLM's words very adeptly, but there is a limitation.

Then there's the greater problem, which we see happening now on X. As LLMs are trained on writing largely generated by other LLMs or by themselves, the feedback loop will cause a degradation in information quality. On X.com we see this happening with the AI right now, which has lost its mind because it has been feeding on X.com posts for quite a while.

That's a microcosm of what will happen to all of us if we rely on AI to generate all of our writing.
 
Good for you! What are you going to teach?
The quickest way to a teaching job is in special education. There are also programs, like the one at the University of Bridgeport, that can help you get a teaching certificate and a placement. Also, try substitute teaching. Make sure it is something you want to do; long-term sub positions are a good way to get an idea of what you are getting into.
 
The problem is that people who never learn how to write are going to have their ability to express themselves diminished. They may be able to use the LLM's words very adeptly, but there is a limitation.
Why is that a negative? I'm already using AI to review emails and construct replies. Or review documents to provide edits. Every meeting I'm in automatically has meeting minutes instantly generated with takeaways for everyone.

Should we give kids a quill and inkwell? Or a piece of coal on the back of a shovel?
 
Every meeting I'm in automatically has meeting minutes instantly generated with takeaways for everyone.
How does this work?
 
Good for you! What are you going to teach?
I haven’t quite got there yet but when I was young I was very much into creative writing and have fond memories of mentors who nurtured that interest. I’d like to pay that forward. I could see highschool English being my path.

I could also see myself becoming an adjunct professor of B school type courses. I have my MBA in finance and MIS and a career in operations leadership.

Right now I’m just enjoying a bit of a break.
 
How does this work?
Microsoft Co-Pilot is installed and attached to every Teams meeting. It knows who's online and which person is speaking. Once the meeting ends, it auto-generates a concise review of the call with any applicable next steps and takeaways.

I can also receive a PowerPoint deck from someone else, drop it in Co-Pilot, and ask for the main bullets or things to know from the deck, and it gives you that. Or I can point Co-Pilot to a folder with a bunch of information in it and ask it to build me a six-page PowerPoint deck based off the info, and it does that instantly. That's probably 70% usable and needs editing, but considering how much time is wasted on a PowerPoint deck that gets presented once and never used again? More than worthwhile.
 
Another funny/interesting story. I was recently asked to create a Vision statement and a Mission Statement. It was sent to me Tuesday around 2 pm and due by end of day Thursday.

I immediately dropped the request into Co-Pilot. Got 100% usable statements for both. Knowing I couldn't send it immediately, I asked Co-Pilot to send the email Thursday at 3:30 pm. I was literally done with little to no actual "work" in 3 minutes.
 
My Grandkids tell me the teachers know when someone has used AI

That falls more on the kid than the AI. A sufficiently clever AI prompt will fool even the most astute teachers.
 
Why is that a negative? I'm already using AI to review emails and construct replies. Or review documents to provide edits. Every meeting I'm in automatically has meeting minutes instantly generated with takeaways for everyone.

Should we give kids a quill and inkwell? Or a piece of coal on the back of a shovel?
The device doesn't matter.
What matters is the writing.
 
But you wouldn’t know who wrote it. Me or AI.
I thought you were responding specifically to my point about writing & expression skills being diminished.

LLMs are doing such a great job because of the many books and specialist studies they've been fed. Over time, the writing in LLMs will degrade. (NOTE: I'm referring to LLMs specifically; what AGI will do in the future in terms of the workforce is another discussion altogether.)
 
We are in the “Golden Age” of AI now, and it may be ending soon. Virtually all website security programs will include AI blockers going forward. As a result, it is going to get much harder and more expensive to update and train AI systems. I also expect copyrights to explicitly deny authorization for use in AI models going forward. Not sure how it will affect teaching, but it will definitely impact AI.
AI could end the world if it advances too much more, and I say that seriously. It is very scary.
 
Small highjack of this thread: Would you go into teaching today as a late career hire?

I left my job at 42 this year. I had originally gone to school for English and Psych with the intention to teach. Way led to way and I ended up doing well enough in supply chain to be able to have lots of options now.

I have a 1.5 year old and a baby on the way in September and I want to be able to prioritize my family.

I was thinking of finally getting into teaching when I return to work in the next year or so, but one small peek at the teachers subreddit had me wide eyed.

For you career teachers, am I crazy? Is it possible to do that job for the love of it in 2025?
Bednutz! (AJ here) Please do. We need as many smart and personable teachers as we can get, and if they have real world experience all the better. All the challenges you read about are real, and I still wouldn’t trade it for anything (having transitioned from a job as an attorney 14 years ago). Now I teach at an underperforming urban high school and it’s the most rewarding thing I can think of doing with my time.
 
I don’t think I’ve ever used AI for anything. I feel like an old millennial that tells AI to get off my lawn.

Story time:
My wife and I were deciding what to do about our fireplace mantle. We’re doing home upgrades. It took me about 5-10 minutes to draw up a couple ideas. It looked like crap to her but I put some effort in and it came out alright. It’s the effort, right? Anyways, my smokin ballnchain uses AI in two seconds to come up with our design. Stupid AI… showing me up. No more AI in our house (kidding)… my drawings are crumpled up in the trash next to my dignity and hard work. AI took my effort away. Moral of story is to work smarter not harder. But damn, I felt useless which sucked haha.
 
Why is that a negative? I'm already using AI to review emails and construct replies. Or review documents to provide edits. Every meeting I'm in automatically has meeting minutes instantly generated with takeaways for everyone.

Should we give kids a quill and inkwell? Or a piece of coal on the back of a shovel?
This is the other end of the argument. I don't think the question is if they will use AI, but when we want them to start… while they are learning to think critically, research, and put together their own ideas, or before they begin this process?

The issue I have with AI use in K-12 schools is that one thing AI is great at is sending you down the same well-traveled road as everyone else.

Let’s tackle it this way: Picture someone at the top of Mt. Snow in January. A world of possibilities.

But everyone skis the same single, well-established path, one trail, leaving the rest of the mountain untouched.

No one discovers a better way down that mountain, or different ways to approach it. Nope, just one hard-packed trail that everyone travels down.

So, I am not arguing this point, only looking at the flip side of the coin. The question to me is when you want to start adding AI to their learning. Is it going to be the spice of learning or the main course?

We need to decide; I think standards need to be adopted.
 
I don't think I understand....or I need to pop a gummy.

Why would everyone ski the exact same path? Some would ask AI what the easiest path was....or the hardest path....or the most scenic path....or how they should go down on a sled or a cardboard box or backwards? And AI would generate the response based off the input.

And, honestly, that's almost a moot point to discuss. Very soon (it's happening already) AI will solve your problems before you know they exist. Think of Waze re-routing you around an accident; in the past, you'd be sitting in standstill traffic for an hour. Last week, my fitness tracker (Whoop) asked me, "Have you started a round of golf? Would you like me to track it?" It must have known where I was via GPS and how my body was reacting/moving based on past golf rounds. That type of thing is going to be an all-day, every-day part of everyone's life.
 
I think maybe the point that needs to be emphasized is that the critical thinking skills that allowed all that information to be created and then uploaded to ChatGPT in the first place, that is what we are losing.

Students are going to just rely on what's been done in the past, based on old thinking... We lose those creative thought processes and skills that put things together in unique ways. Again, it's not that AI will never be used. It's about when.

Just remember, AI is just a collection of what other people think. When they start to think less over time we all get just a little bit dumber...

My suggestion would be that they should start using it at age 24.
 
I don't think I understand....or I need to pop a gummy.

Why would everyone ski the exact same path? Some would ask AI what the easiest path was....or the hardest path....or the most scenic path....or how they should go down on a sled or a cardboard box or backwards? And AI would generate the response based off the input.

And, honestly, that's almost a moot point to discuss. Very soon (it's happening already) AI will solve your problems before you know they exist. Think of Waze re-routing you around an accident; in the past, you'd be sitting in standstill traffic for an hour. Last week, my fitness tracker (Whoop) asked me, "Have you started a round of golf? Would you like me to track it?" It must have known where I was via GPS and how my body was reacting/moving based on past golf rounds. That type of thing is going to be an all-day, every-day part of everyone's life.
Solving problems before they happen sounds great and also pretty damn dystopian. I'm not sure how you build any resolve or coping skills if you don't face adversity. If you drive blindfolded and the AI stops you from crashing and gets you to your destination, you'll never get better at driving. Now extend this to every facet of life and you'll see where the problem is.
 

Younger people may use it to learn. Older people may use it and need it to adapt to their own changing world.
 
Solving problems before they happen sounds great and also pretty damn dystopian. I'm not sure how you build any resolve or coping skills if you don't face adversity. If you drive blindfolded and the AI stops you from crashing and gets you to your destination, you'll never get better at driving. Now extend this to every facet of life and you'll see where the problem is.

But what if you never have to drive someday?
 
What else will you not have to do? Do you just wake up and the AI has an agenda set out for you and walks you through all the paces of your day? It cooks for you, feeds you, takes you to your place of employment. Does your job, makes small talk with your coworkers throughout the day, brings you to a restaurant after where the AI has already ordered for you and invited your spouse/friend/family to dinner. Then it takes you home and puts on content that it generated specifically for you, and turns it off when it determines you need to go to sleep? Then you wake up and do it all over again?

Now let's say that's your life from birth until one day something goes wrong. How do you deal with that thing going wrong when you're so used to the AI providing everything?
 
Do you just wake up and the AI has an agenda set out for you and walks you through all the paces of your day? It cooks for you, feeds you, takes you to your place of employment. Does your job, makes small talk with your coworkers throughout the day, brings you to a restaurant after where the AI has already ordered for you and invited your spouse/friend/family to dinner. Then it takes you home and puts on content that it generated specifically for you, and turns it off when it determines you need to go to sleep? Then you wake up and do it all over again?
Season 7 Reaction GIF by The Office
 
What else will you not have to do? Do you just wake up and the AI has an agenda set out for you and walks you through all the paces of your day? It cooks for you, feeds you, takes you to your place of employment. Does your job, makes small talk with your coworkers throughout the day, brings you to a restaurant after where the AI has already ordered for you and invited your spouse/friend/family to dinner. Then it takes you home and puts on content that it generated specifically for you, and turns it off when it determines you need to go to sleep? Then you wake up and do it all over again?

Now let's say that's your life from birth until one day something goes wrong. How do you deal with that thing going wrong when you're so used to the AI providing.

Okay. Well, compare your life today to the life of a man in the 1950s. You're not complaining about riding lawn mowers, or laptops that let you work from home, or GPS, or all the other things that are now exponentially easier. They'd look at you just like you're looking at this future view you paint.
 
I don't think I understand....or I need to pop a gummy.

Why would everyone ski the exact same path? Some would ask AI what the easiest path was....or the hardest path....or the most scenic path....or how they should go down on a sled or a cardboard box or backwards? And AI would generate the response based off the input.

And, honestly, that's almost a moot point to discuss. Very soon (it's happening already) AI will solve your problems before you know they exist. Think of Waze re-routing you around an accident; in the past, you'd be sitting in standstill traffic for an hour. Last week, my fitness tracker (Whoop) asked me, "Have you started a round of golf? Would you like me to track it?" It must have known where I was via GPS and how my body was reacting/moving based on past golf rounds. That type of thing is going to be an all-day, every-day part of everyone's life.
Sounds like hell.
 