Staying in Ethics and Legal with ChatGPT usage?
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
Having a machine spit out content then presenting it as something I created is a lie.
A shovel is a tool. A screwdriver is a tool. A computer is a tool.
Content is the creator's own.
So you recognize other tools; what makes this one different to you? It is a tool and requires an operator. The output of tools is owned by the operator of those tools. Even if the shovel moves the dirt, you say that you moved the dirt. Even if a screwdriver turns a screw, you say that you screwed in the screw.
Apply your rules universally and the answer is clear. By your own example, clearly ChatGPT is just a tool and the output is the product of the operator.
-
Let me ask another way... can you make a general rule that allows the things you want to include (spell checkers, word processors, printers, Grammarly, and other tools that removed tasks once considered critical for education or labor) and disallows whatever ones you think shouldn't be allowed? (I have no idea what you think shouldn't be allowed, so I can't give an example here.)
Basically, without picking on specific products, can you define what it is you think is bad? Because all of those products were considered to create some degree of the "content" in their time. To me, the content is the concept, not the words, not the paper, not the font, not the Google search. To me, you are trying to assign all the value to the mechanisms of writing, and not the ideas and subject matter.
-
@scottalanmiller said in Staying in Ethics and Legal with ChatGPT usage?:
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
In those examples the original content comes from the mind of the one hitting the keys.
To some degree, but not entirely. And the same is true of ChatGPT. You still need a competent operator to make it produce useful output. The average person can't operate it to get a good PhD thesis, for example. So it still comes from the mind of the operator of the tool.
I've come to realize that I will never again trust anything from anyone without proof that they know what they have presented.
That's going to be the differentiator in my mind.
There's always the person that comes from the "School of Good Enough" and it's those folks that will try and coast through using any "tool" they can without investing the time and effort needed to actually know something.
This conversation puts an entire segment of conferences, conventions, and so much more into question. I'll never be able to look at a person as being knowledgeable without having a conversation with them to determine whether they are a ChatGPT Clone or the real deal.
That's a really sad place to be in, Scott.
As I mentioned above, ownership means, "I did that". It came from me, not some machine. There is an inherent sense of accomplishment there.
There is no accomplishment having a machine do it for us. None.
And this is the point that doesn't seem to be registering here. Lots of deflections and explanations.
KISS
I do it = mine.
I write it out = mine.
I use a hammer, nails, a string, a tape measure, and a hose to build a house and I did it.
To have someone that has gone through life having it done for them by the ChatGPTs of this world is beyond sad, with that person missing one of the most important aspects of being human: I created that.
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
I've come to realize that I will never again trust anything from anyone without proof that they know what they have presented.
That's going to be the differentiator in my mind.
Ah, but I think this just exposes something big. Why did you feel that they knew the material before? This is why PhD students defend their theses... anyone can produce the paper; it's explaining and defending the concepts live that gets you a degree, not the paper.
If you were using the written paper as a proxy for testing someone's knowledge of something, then yes, I can see why you care about the mechanism rather than the output. But I'd say, again, all that is happening is that what was already true is being exposed.
As someone who made a living for a while in high school writing essays by request, I know how common it is to not have written your own paper. I don't know why people bought essays from me; whether they used them as source material, cited them, used them to summarize research, or turned them in as their own was not my concern. I was hired to write papers on topics. I didn't even know who got them. But I know actual intelligence went into writing papers that were used by people who knew nothing of the material.
When I went to university, the top-ranked uni in the US at the time that I went, it was expected that you had copied answers from previous years. They assumed what other places called cheating as a baseline and tested only above that. If you didn't take the time to obtain and memorize previous years' tests, you would almost certainly fail. They didn't test on that material itself; they assumed it as a baseline of available knowledge.
So I see what you are saying, but what I'm saying is that the inability to trust a paper with good words on it to reflect the knowledge of the person turning it in was already there. ChatGPT isn't changing the game there in any way. Authors of works, even if they wrote every word themselves, rarely understand the material deeply. Writing an essay simply is not a good test of that.
So the issue, and the solution, should be pretty clear. Essay writing was never a great process for education (or work); we've just exposed that beyond question now. But for many of us, that happened long, long ago. Now you need to focus on discourse, which has always been the case.
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
There's always the person that comes from the "School of Good Enough" and it's those folks that will try and coast through using any "tool" they can without investing the time and effort needed to actually know something.
Right. And that's how I feel people avoiding AI in writing are approaching it. They think that it is "good enough" to grade or evaluate on the unimportant, automatable portions of writing that we don't need humans for... because that part is easier to test. Spelling, sentence structure, dates, names, citations... all of that requires effort, but not thought. No creativity, no value. To care about any of it is about making things "good enough."
If I'm evaluating someone's ability to learn a subject, I want to know if they can discuss it, live. How quickly they react. How much they can deal with the unknown (counter ideas thrown at them in real time), etc. Writing facts or even producing opinions with lots of free time is easy. Defending a position in real time requires you to actually know things, not have looked them up in the past. Very different things.
This is why, when interviewing people, we do conversations. Anyone can potentially answer questions, even people with no idea what they are answering, but carrying on a meandering, deep conversation, where ideas are bantered about and applying cross-domain knowledge in real time is required, tests something very different.
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
This conversation puts an entire segment of conferences, conventions, and so much more into question. I'll never be able to look at a person as being knowledgeable without having a conversation with them to determine whether they are a ChatGPT Clone or the real deal.
That's a really sad place to be in, Scott.
This is where I don't agree. And by that I mean... that it is a sad place. I think it's the normal place, and it is fine. This is the position you should always have been in. I see you coming to the same realization I've had since I was in elementary school, when I realized that my teachers were bluffing about topics because by second grade I was asking about things that they had never been exposed to (math, specifically).
So yes, I agree that you need to not blindly trust that those people know much of anything. Certainly, truly, that is absolutely the case. But it's not new, at all. Now you know that the ways we have been taught to evaluate people all along were just proxies at best, scams at worst.
-
@scottalanmiller said in Staying in Ethics and Legal with ChatGPT usage?:
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
There's always the person that comes from the "School of Good Enough" and it's those folks that will try and coast through using any "tool" they can without investing the time and effort needed to actually know something.
Right. And that's how I feel people avoiding AI in writing are approaching it. They think that it is "good enough" to grade or evaluate on the unimportant, automatable portions of writing that we don't need humans for... because that part is easier to test. Spelling, sentence structure, dates, names, citations... all of that requires effort, but not thought. No creativity, no value. To care about any of it is about making things "good enough."
If I'm evaluating someone's ability to learn a subject, I want to know if they can discuss it, live. How quickly they react. How much they can deal with the unknown (counter ideas thrown at them in real time), etc. Writing facts or even producing opinions with lots of free time is easy. Defending a position in real time requires you to actually know things, not have looked them up in the past. Very different things.
This is why, when interviewing people, we do conversations. Anyone can potentially answer questions, even people with no idea what they are answering, but carrying on a meandering, deep conversation, where ideas are bantered about and applying cross-domain knowledge in real time is required, tests something very different.
Scott, I don't feel, I think. That's another disconnect that seems to happen when we have these kinds of discussions.
Logic, whether Aristotelian, Boolean, or modern, is the order of the day, not feelings. And I'm not that good at it. I need to get schooled by our eldest on a regular basis on which fallacies are at work when I see something that doesn't seem to line up but can't grasp it.
Suffice it to say, I believe that I've made my premise set and conclusions very clear in a fairly logical structure.
No feelings.
Just thoughts and a conclusion.
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
As I mentioned above, ownership means, "I did that". It came from me, not some machine. There is an inherent sense of accomplishment there.
There is no accomplishment having a machine do it for us. None.
Sort of. I get the point, but there are two really critical counterpoints.
-
Anyone can make an essay now. But a deeply meaningful one still requires (and long will require) that a human be a creative part of the process. ChatGPT remains only a tool, and you have to learn how to use it well to get good output. And you definitely have to verify that output and generally massage it. But even assuming you skip those last parts, the new skill is in managing to make the tools produce the output that you want. That's not going to be easy; humans can't even do that to each other yet.
-
People typically feel ownership as valuable when they accomplish something. Manually doing automatable work, like being a factory worker, is generally mentally crushing. People don't look at their work and think "I did this thing"; they actually feel like "I wasted my time." It breaks their spirits, it causes them to doubt their self-worth; it's really bad. You are absolutely right that people need to feel that value; my point is that this isn't how you can do that. But my whole happiness around tools like this is that they should free up humanity to move on from wasting time and have more free time to do things that we can really feel accomplished about instead!
Silver lining.
-
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
Scott, I don't feel, I think. That's another disconnect that seems to happen when we have these kinds of discussions.
Humans feel, they don't think. Humans only think after they feel. One of the most valuable tools, if not THE most valuable, for being able to apply logic is first accepting that our thoughts start in emotions and that we can only apply logic afterwards.
I highly recommend Predictably Irrational. Generally, humans cannot reliably produce rational thought until they first both understand and internalize that they are inherently irrational. We are capable of rational thought, but no human gets rational thought first.
https://www.amazon.com/Predictably-Irrational-Revised-Expanded-Decisions/dp/0061353248
That we humans tend to think of ourselves as rational, thinking machines instead of emotional ones is actually an amazing argument for the value of ChatGPT and AI: allowing computers to do logic, which they are good at, and freeing humans to be emotional, which we are good at.
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
Suffice it to say, I believe that I've made my premise set and conclusions very clear in a fairly logical structure.
No feelings.
Just thoughts and a conclusion.
From a logical perspective, I've not seen where you've set out your goals. That's the starting point of applying logic. I don't agree that you've set out a logical argument, because without a goal to argue for, I feel you are floundering with points that don't support where you seem to be trying to go.
Like the "good enough." Clearly, to me, you seem to dislike the concept of only doing things "good enough", but you also seem, to me, to argue against excellence. So to me, it's a logical mismatch. But this depends on your end goals.
-
So while I feel I set out my points, let me clarify them.
Goals:
-
In education: for the student to learn and grow in higher thought, reasoning, values, and perception, in a context of helping them both in their professional or other income-earning growth and in the good of humanity's growth, with no concern for any mechanical, automatable, or boilerplate components beyond those necessary to achieve the goal.
-
In business (this should go without saying): to produce work that is in the best interest of the business.
Ancillary goal: to reduce wasted time, unnecessary labor, and societal dishonesty.
When I attempt to apply logic to my feelings about the subject, it is toward these goals that I am arguing. And I believe that both the things I've said and the things you have said point to the use of AI and ChatGPT as the best way to support my stated goals.
I know your goals are different, but I don't know what they are. So I can't evaluate if your logic leads to accomplishing your goals. But to me, your logic does point to accomplishing mine.
-
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
No feelings.
Just thoughts and a conclusion.
So I see two ways to look at this.
- Answering whether it is ethical and legal to use these tools. We assume this to mean: is it plagiarism or a violation of copyright?
In this I believe I've laid out black-and-white arguments based on law and definition, against which no rebuttal has been made. You have disagreed, but I am unclear that you made any logical argument against it; you only disagreed emotionally. I used the definitions, and I'm unaware of any argument of the form "but the definition is wrong" or "but it violated the definition as follows...". So my understanding is that my conclusion, and yours, is that it is both ethical and legal to use these tools. No actual logical points against have arisen.
- Answering whether it is valuable and right to use them. I assume we mean: for the good of education, improving brain function, growing society, and benefiting our work or employer.
In this I believe I've stated how my goals would be approached, and why your arguments, and mine, show that the use of AI supports those goals.
So the only conclusion I have is that your tone says you dislike the tools, but your logic says that they are good and should be used. Emotionally, I understand why you would dislike them; we all hate change. It's an unavoidable human reaction, even for those of us who tend to benefit most from changes, and I feel it too. But logically, it seems we've reached the same conclusion: they are good, and we shouldn't just allow them to be used, we should rejoice that they are used, as they seem to be universally good, unless you don't want to expose corruption (like professors who were trying to avoid meaningfully teaching and evaluating).
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
KISS
I do it = mine.
I write it out = mine.
I use a hammer, nails, a string, a tape measure, and a hose to build a house and I did it.
At this point, your logic and example say that ChatGPT is good and the output is yours. It is a tool like those others.
To have someone that has gone through life having it done for them by the ChatGPTs of this world is beyond sad, with that person missing one of the most important aspects of being human: I created that.
How do you see ChatGPT as different from other tools? This is where I see a leap of logic that I can't follow. To anyone who has done things "more manually," each of those tools feels like "not doing the work." To anyone living in a society that has made those tools universal, they feel like just tools to get you past the busywork that you want to avoid. It's a continuum to me.
So yes, in a KISS example, all tools are tools. All humans operating the tools are the creators.
You have to make a special case and avoid KISS to make ChatGPT fall outside of your example.
-
I wonder, and I'm just wondering, if maybe you have a goal like "Learn to write a paper well," with "manually" added. If so, I get that. So let's assume that that is the goal.
Then this would clearly make ChatGPT a valuable research tool, even a sample generating tool, but using it to avoid "manually writing the paper" would logically violate the point.
For me, a key goal would be to "avoid anyone needing to learn to write papers." Because to me, that is a skill that exists only to placate incompetent professors and to fill time that could go to learning more meaningful material. Being able to write well, something I take great pride in being good at, is essentially worthless. It has very little, if any, academic benefit and basically no function in the workplace. It's an entire human activity that exists only to waste time in the school process.
Being able to research and produce good information on a topic is super important. And if ChatGPT does that better than a human, then learning ChatGPT is the appropriate means to that end. But laying out that information, while potentially valuable forty years ago, isn't valuable today. So the last thing I would want to do is waste students' time on it instead of having them learn things that they can use in the future.
I find the time spent learning how to write, how to cite, and how to format essays to be directly counter to educational goals. Wasting time while expending effort is highly detrimental, because your brain works very hard doing things it is bad at (double-checking citations is labor-intensive and a wholly worthless activity - it's just about copying notes correctly), leaving it exhausted and without time or energy for learning valuable things.
-
@scottalanmiller said in Staying in Ethics and Legal with ChatGPT usage?:
I wonder, and I'm just wondering, if maybe you have a goal like "Learn to write a paper well," with "manually" added. If so, I get that. So let's assume that that is the goal.
Then this would clearly make ChatGPT a valuable research tool, even a sample generating tool, but using it to avoid "manually writing the paper" would logically violate the point.
For me, a key goal would be to "avoid anyone needing to learn to write papers." Because to me, that is a skill that exists only to placate incompetent professors and to fill time that could go to learning more meaningful material. Being able to write well, something I take great pride in being good at, is essentially worthless. It has very little, if any, academic benefit and basically no function in the workplace. It's an entire human activity that exists only to waste time in the school process.
Being able to research and produce good information on a topic is super important. And if ChatGPT does that better than a human, then learning ChatGPT is the appropriate means to that end. But laying out that information, while potentially valuable forty years ago, isn't valuable today. So the last thing I would want to do is waste students' time on it instead of having them learn things that they can use in the future.
I find the time spent learning how to write, how to cite, and how to format essays to be directly counter to educational goals. Wasting time while expending effort is highly detrimental, because your brain works very hard doing things it is bad at (double-checking citations is labor-intensive and a wholly worthless activity - it's just about copying notes correctly), leaving it exhausted and without time or energy for learning valuable things.
Scott,
That's a whole lot of words. Again, we seem to be at the point where I've made simple points and backed them up, and I end up with a dissertation as to why my conclusions are wrong, usually based on some sort of feelings-type thing.
I will repeat myself this one last time: the human person produces something from their mind and heart. Some call it art, some call it poetry, some call it literature, and so much more. A dissertation or a thesis would be other examples, or perhaps an essay on a Dostoyevsky novel.
That creativity comes from that person. Thus, they own what they have done.
There is no way to reconcile what comes from within a person via their own thought processes and creativity and what comes from a machine as far as ownership goes.
They are not the same and never will be.
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
That's a whole lot of words. Again, we seem to be at the point where I've made simple points and backed them up, and I end up with a dissertation as to why my conclusions are wrong, usually based on some sort of feelings-type thing.
No, you didn't read. I said I felt your conclusions were confusing because you didn't give an intended goal, and that I felt your conclusions all supported my point if you shared my goals. That's why I stated what my goals are.
You'll notice I carefully noted that I, you, and all people are emotional. You act like you aren't a human and are just logical, but then talk about the value of creativity in this process. I pointed out, I feel quite logically, that humans are inherently non-logical, and acknowledged that all of us, no exceptions, get our first thoughts emotionally and then attempt (hopefully) to apply logic to that initial emotion. I've said "I feel" many times.
I can't help but feel that you take offense at being given humanity in how I speak.
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
That creativity comes from that person. Thus, they own what they have done.
There is no way to reconcile what comes from within a person via their own thought processes and creativity and what comes from a machine as far as ownership goes.
They are not the same and never will be.
So again... you are saying that the TOOL owns the output?
And you are saying that word processors, spell checkers, Grammarly and other tools own the output?
If you take out the emotion and use only logic, I can't see how this makes sense unless you treat all tools equally. What makes you treat ChatGPT, which is not the only AI writing enhancer in the list, uniquely?
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
The human person produces something from their mind and heart. Some call it art, some call it poetry, some call it literature, and so much more. A dissertation or a thesis would be another set of examples or perhaps an essay on a Dostoyevsky novel.
That creativity comes from that person. Thus, they own what they have done.
For sure, and they do so using tools. And the output of those tools, whether a pencil or ChatGPT, is the creation of the operator of that tool. It is the creativity, the skill, and the knowledge that it takes to operate that tool that make the difference between good and bad output. Tools like ChatGPT (or a spell checker or a pencil) don't REMOVE creativity; they remove the non-creative portions of the work, allowing MORE focus on the important parts that you describe. This is why ChatGPT is the current king of "allowing for the most creativity for humanity," removing the most (so far) of the unnecessary mechanical components that aren't creative.
Like I said, every time you give a logical argument, I see you agreeing with my assessment. But then you act like you said the opposite.
-
So, critically, I ask again: given that you state that you are using logic and not emotion, you can only apply logic if there is a goal to apply the logical operators against. I stated my goals and asked if yours were the same or different.
I struggle to understand the claim of using logic in a scenario where there is no stated or implied goal. How can one claim to have logical steps toward a goal if no goal exists? That is inherently, I feel, non-logical as a process.
I don't think you can say that the interpretation is logical until you state the goal.
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
That creativity comes from that person. Thus, they own what they have done.
Just to be clear, the language and the law agree, and they agree that the use of tools to do this changes nothing.
So we are totally in agreement on these statements. And these statements were the basis of my logic that I laid out.