Would you support your child in pursuing a degree in English Literature?

Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Yes. I'd encourage them to research what career they could develop from that major. I've been seeing more and more jobs seeking those with journalism and English degrees, which surprised me.

For example: technical writer, communications professional, digital marketer/copywriter.

B2B copywriters can earn a lot.

One of the most successful people I know from college was a theatre major who now earns well over $200K in communications. Personality and a network will do a lot to help; this person has a way with people.


Generative AI says hi


NP. I write and edit blogs and web materials. Guess what? Companies are turning to AI and then turning right back to humans to edit AI-generated materials, because those materials still sound bot-like and are so often inaccurate, too. And before you chime in with "AI will soon improve until you're not needed!" -- sure, it'll improve, but only by being "trained" by human writers and editors.


Here is an excerpt from a U Wisconsin philosophy professor...decide if you think AI will dramatically change things or not:

"ChatGPT has many of my university colleagues shaking in their Birkenstocks. This artificial-intelligence tool excels at producing grammatical and even insightful essays — just what we’re hoping to see from our undergraduates. How good is it, really? A friend asked ChatGPT to write an essay about “multiple realization.” This is an important topic in the course I teach on the philosophy of mind, having to do with the possibility that minds might be constructed in ways other than our own brains. The essay ran shorter than the assigned word count, but I would have given it an A grade. Apparently ChatGPT is good enough to create an A-level paper on a topic that’s hardly mainstream."

I think this says more about the deflated standards of modern academia. I often talk with professors at the LAC I graduated from. Every time they've tried it out as a department, they say ChatGPT can hardly write a C-level paper. They even have pro-AI syllabi now, because the students who use it don't do any better.


That assertion can't be supported at all. I doubt the LAC professors know how to train the model, nor are they hoping it produces great work...because if it does, it cuts to the core of their identity.

Disparaging other academics really isn't a winning argument.

They invite students into those particular department meetings and get them to train it for them. You should read up on the Reddit forum r/professors. A lot of state schools have dropped their academic standards for the humanities and have even pressured academics to grade students higher.
Anonymous
Sure…I will accept a Reddit forum as authoritative. Do you realize how dumb that sounds?

Maybe they could actually hire some prompt engineers to train the LLM…you do realize that LAC students probably don't have advanced graduate-level ML or linguistics skills, because an LAC doesn't have graduate programs. Also, training an LLM takes longer than a department meeting (although I imagine those are long, painful meetings).
Anonymous
I think you're just unknowledgeable and assume that insults give you authority to speak on this. Academics love social media. The main way academics communicate with one another is Twitter (or X now, or whatever Musk will call it next). Reddit is also where academics go to talk about these issues. Did you think they conversed through stone tablets and Academia messages?

I don't think any department is prompt-engineering AI. But they are testing, with an open mind, to see what it's worth. Most of the department entered scared out of their minds that it would produce something substantive at the level of the students, and left very comfortable with the results.
Anonymous
Once more, it sounds like you have shown complete ignorance of the topic.

You don't understand how to train the AI, and you keep giving examples of horribly half-assed attempts to train it. Please, just stop and admit you don't understand it.

It is still quite odd that you would disparage a philosophy professor who had no vested interest in ChatGPT producing quality work, yet now you claim that it's really the professor, and his inability to evaluate quality writing, that is at fault.
Anonymous
What access to prompt engineering do you think students have that professors don't? There are literally only two demographics in this discussion to worry about.
No one is disparaging anyone. I'm sure they're great at philosophy, whoever this unnamed U Wisconsin philosophy professor is.
Anonymous
They're professors at an LAC, which doesn't have graduate programs, right? Isn't that the literal definition of an LAC? So they aren't doing advanced research on the topic, nor do they have graduate students working on it. Again, that's correct, right?

The professor isn't unnamed. His name is Lawrence Shapiro.

Why can't you admit the obvious…you don't understand the technology, yet you keep flailing.
Anonymous
Anonymous wrote:I mean, I'm paying for a music major, so I don't see why not. In fact, I think humanities majors will have much less competition now that all these parents are forbidding their kids from pursuing them.

I think the question is what kind of job your kid can find with just a humanities undergrad degree.

Most grads don't come from family money. They need to find a job and pay the rent. Look how many millennials are living with their parents after graduation. It's expensive to live on your own and support yourself financially.
Anonymous
Yes, 100%. I wanted to go to college to study literature and to write. It was not allowed. All I ever did was read and write; I read Tolstoy in 8th grade. But to appease my parents, I went into psychology, a mistake, as I hated it.

I am good at being an academic; I eventually got a master's degree in history. I don't know what I would have been if I had studied literature, but I know I would be good at being a professor.
Anonymous
I have an English degree. In a career of 20+ years, I've worked in ad agencies and journalism, and I'm now in corporate communications for a government agency, where I've progressed into a senior role. There are many jobs and career paths for this degree.

I would suggest choosing a minor related to whatever field of communications they might like to work in eventually (e.g., do they want to become a financial journalist, or perhaps work in corporate communications in the banking field? Pursue a commerce minor).
Anonymous
Anonymous wrote:
They're professors at an LAC, which doesn't have graduate programs, right? Isn't that the literal definition of an LAC? So they aren't doing advanced research on the topic, nor do they have graduate students working on it. Again, that's correct, right?

The professor isn't unnamed. His name is Lawrence Shapiro.

Why can't you admit the obvious…you don't understand the technology, yet you keep flailing.

Holy cow, you are TRYING to miss the point.
Undergraduate students (we're talking about 99.999%+ of them) are not doing the graduate ML research you're discussing, so why is this being brought up at all?
Most students are typing their prompt into ChatGPT, asking for ideas on how to start, or asking for simile and word-choice suggestions. No one is breaking out Python 3 to begin working on their Plato assignment.
Now, to your point about grad-school access, or whatever you're trying to get at: CS can be researched almost anywhere. It's very similar to math, where if the school has a chalkboard you have the resources you need; for CS, it's more a question of whether you have a good server system, a computer, and...well, a whiteboard. Many LAC CS faculty are collaborators on CS research; it's not really a field that needs as many tools as, say, physics or biology. I still don't know what this has to do with students' AI papers, but there you go. Lawrence Shapiro also advocates for AI usage.
Anonymous
Anonymous wrote:
I think the question is what kind of job your kid can find with just a humanities undergrad degree.

Most grads don't come from family money. They need to find a job and pay the rent. Look how many millennials are living with their parents after graduation. It's expensive to live on your own and support yourself financially.


I get that, but young people aren't widgets. My kid was born to be a musician, and it's the only thing he's ever wanted to do. He won't (and shouldn't) turn into a computer science major just because I ask him to. If not for music, he wouldn't be going to college at all.
Anonymous
Anonymous wrote:
Holy cow, you are TRYING to miss the point.
Undergraduate students (we're talking about 99.999%+ of them) are not doing the graduate ML research you're discussing, so why is this being brought up at all?
Most students are typing their prompt into ChatGPT, asking for ideas on how to start, or asking for simile and word-choice suggestions. No one is breaking out Python 3 to begin working on their Plato assignment.
Now, to your point about grad-school access, or whatever you're trying to get at: CS can be researched almost anywhere. It's very similar to math, where if the school has a chalkboard you have the resources you need; for CS, it's more a question of whether you have a good server system, a computer, and...well, a whiteboard. Many LAC CS faculty are collaborators on CS research; it's not really a field that needs as many tools as, say, physics or biology. I still don't know what this has to do with students' AI papers, but there you go. Lawrence Shapiro also advocates for AI usage.


No…holy cow, you are going to great lengths to avoid admitting that you don't understand how to properly train ChatGPT and other LLMs. If you did, you would understand that it can produce high-quality work…but you don't, so you can't.

Your simplistic understanding of graduate-level work in ML and other core areas is also embarrassing.

Please, just admit you don't understand it…you can do it.
Anonymous
Sure, the philosophy professor is training algorithms for a career. When are you going to realize that he literally opened ChatGPT, typed in one of his prompts for a philosophy seminar he had coming up for Fall 2023 (or whenever he wrote his Washington Post article), and was surprised by the results? Clearly not surprised enough to steer him away, since he advocated FOR AI in the classroom. You're trying to turn the discussion into one about AI training when most people don't literally rebuild ChatGPT to use it, especially not a philosophy professor, and even less so the undergraduate students cheating in his "Epistemology 101" course.
Anonymous
Anonymous wrote:
No…holy cow, you are going to great lengths to avoid admitting that you don't understand how to properly train ChatGPT and other LLMs. If you did, you would understand that it can produce high-quality work…but you don't, so you can't.

Your simplistic understanding of graduate-level work in ML and other core areas is also embarrassing.

Please, just admit you don't understand it…you can do it.

NP. This is very idiotic. I hope you don't have a degree.
Anonymous
Another dips**t who doesn't understand?