ChatGPT Thinks it is not Ethical! What do you Think?

Is ChatGPT ethical and legal?
An interview with ChatGPT about whether it is ethical!

Transcript

ChatGPT Ethics Interview
[00:00:00] David: Is what ChatGPT, GPT-3 and OpenAI are doing ethical and legal? Hi, I’m David Wilkinson. I’m the editor and founder of The Oxford Review, and this is a really important question, particularly if this technology is going to continue. We just need to ask ourselves where the documents it’s being trained on are coming from.
[00:00:19] Well, quite a lot of it appears to be coming off the internet. It seems to be documents that they’ve scraped from various places, as well as websites and things like that, that are publicly available but also have copyright notices on them. So you’ll see that at the bottom of quite a lot of websites. For example, when somebody comes onto my website, the Oxford Review website, and they read something and they want to link to it.
[00:00:46] They’re providing a backlink, but they’re also providing a reference because people tend to say, we got this from the Oxford Review. And likewise, if you produce a piece of content, then when somebody uses it, they should cite it and say where they got that [00:01:00] information from. And that helps you.
[00:01:01] And it helps them, and it’s great. But what’s happening with ChatGPT is not that. What they’re doing is taking the content and then regurgitating it in different ways, without citation and without a backlink. And the question is: what is that in terms of how ethical it is and whether it’s legal, but also what impact is that going to have on the internet over time?
[00:01:26] If the search volumes start to reduce and people aren’t going to websites anymore, they’re just going to AI engines like ChatGPT. So I decided I was going to interview an expert on this, and who I’ve decided to interview is ChatGPT itself, and its answers may just surprise you. So let’s go over to the interview that I held with ChatGPT and see what it reckons about how ethical and legal what it’s doing is.
[00:01:57] So let’s ask it some questions. So [00:02:00] let’s ask it this first question: is taking someone’s ideas and writing and using them as your own a form of theft? Let’s see what it reckons.
[00:02:10] ChatGPT: Yes. Taking someone else’s ideas and presenting them as your own without proper attribution is considered plagiarism, which is a form of intellectual property theft.
[00:02:18] Plagiarism is the act of using or closely imitating the language and thoughts of another author without authorization, and the representation of that author’s work as one’s own, which is unethical and violates copyright laws. It’s important to give credit to the original source of the ideas, thoughts, and expressions, and to obtain permission if necessary in order to avoid plagiarism and respect the original author’s rights.
[00:02:42] David: Okay, so let’s just ask it the next question: so it is important to state what our sources are? Let’s see what it says there.
[00:02:51] ChatGPT: Yes, it is very important to properly cite and acknowledge your sources. This not only gives credit to the original authors and [00:03:00] shows that you have done your research, but also helps to establish credibility and demonstrate that your work is trustworthy.
[00:03:06] Citing your sources also allows readers to follow up on your work and check the accuracy of your information. There are many different citation styles, including MLA, APA, and Chicago, each with their own specific rules and guidelines for formatting and referencing sources. It is important to use the citation style appropriate for your field or as specified by your instructor or publisher.
[00:03:29] By citing your sources, you are not only avoiding plagiarism, but also contributing to the ongoing academic conversation and helping to build knowledge in your field.
[00:03:39] David: Where are you getting your data from to answer people’s questions?
[00:03:42] ChatGPT: As a language model developed by OpenAI, I was trained on a diverse range of texts from the internet, including articles, books, and more. During the training process, I learned patterns and relationships in the text data, which enables me to generate responses to [00:04:00] questions.
[00:04:01] However, it’s important to note that the quality of my responses will depend on the quality and relevance of the text data I was trained on. My training data has a knowledge cutoff of 2021, so I may not have the most up-to-date information on certain topics, and my responses should not be taken as authoritative.
[00:04:18] Additionally, some information in my training data may be inaccurate, outdated, or biased, so it’s always a good idea to double check the information I provided and to consult other reliable sources as needed.
[00:04:31] David: But you are using that information without referencing the sources.
[00:04:35] ChatGPT: That is correct. As an AI language model, I do not provide specific references or citations for the information I provide in my responses.
[00:04:45] I generate responses based on patterns in the text data I was trained on, but I do not retain information on the specific source of each piece of information. It is important to remember that while I can provide helpful information and answer questions, my [00:05:00] responses should not be taken as fact without additional verification.
[00:05:03] It is always a good idea to consult multiple sources and to fact check information, especially when making important decisions or relying on the information for academic or professional purposes.
[00:05:15] David: So by not citing your sources, you are using the information others are creating or generating without attribution.
[00:05:22] ChatGPT: That’s a valid point. The information used to train AI models like ChatGPT is often created by others and may include texts from a wide variety of sources. AI models like ChatGPT can generate new texts based on the patterns in that data. It’s important to acknowledge the source of that information and to give proper credit where credit is due. In this sense, not citing sources when using information generated by AI models like ChatGPT could be seen as a form of using others’ work without proper attribution, which is not in line with ethical practice. It’s always important to be transparent about the sources of [00:06:00] information and to give proper credit to the creators of that information, regardless of whether it is used in a human-written text or generated by an AI model.
[00:06:08] David: So isn’t that akin to theft?
[00:06:10] ChatGPT: No. Using an AI language model like ChatGPT to generate responses to questions is not the same as theft. Theft refers to the unauthorized taking of someone else’s property, while the information I provide is generated based on the text data I was trained on, which was collected from publicly available sources.
[00:06:29] However, it is important to use AI language models responsibly and to consider the potential biases and limitations of the information they provide. As I mentioned earlier, some of the information in my training data may be outdated, inaccurate, or biased, so it’s important to double check the information I provide and to consult other reliable sources as needed.
[00:06:51] Additionally, it is always important to properly cite and give credit to the original sources of information when using information for academic or professional [00:07:00] purposes. AI language models like ChatGPT can be a useful tool for quickly accessing information and generating responses, but they should not be considered a substitute for thorough research and fact checking.
[00:07:12] David: Although, given the former things which we’ve seen up here: is taking someone’s ideas and writing and using them as your own a form of theft? Yes, we had a response. But you are not citing the sources of your information.
[00:07:24] ChatGPT: That is correct. I do not provide specific citations or references for the information I provide in my responses. When using an AI language model like ChatGPT to generate responses, it’s important to keep in mind that the information I provide is generated based on patterns in the text data I was trained on and is not guaranteed to be accurate, complete, or up to date.
[00:07:45] David: I’m just gonna cut its voice there because it’s just repeating itself in the last two paragraphs. By not citing your sources, you are using the information others are creating or generating without attribution.
[00:07:58] ChatGPT: Yes, that is correct. By not providing specific references or citations for the information generated by an AI language model like ChatGPT, it could be perceived as using the information generated by others without proper attribution or credit.
[00:08:14] David: It also means that people are not visiting the websites that are the sources that information is stored on, doesn’t it?
[00:08:21] ChatGPT: Yes, that is a potential outcome of using AI language models like ChatGPT to generate responses without proper attribution or citation. When information is not properly attributed and credited to its original sources, it can lead to a reduction in traffic to those sources, as well as a potential loss of revenue for the creators or owners of that information.
[00:08:43] Additionally, proper attribution and citation provide important context for the information, such as the date it was published, the author’s perspective or bias, and any updates or revisions to the information. By visiting the original sources of the information, individuals can [00:09:00] access this important context and get a more complete and accurate understanding of the information.
[00:09:05] Therefore, it is important to use AI language models responsibly and to properly attribute and credit the original sources of the information when using information generated by these models, especially in academic and professional settings. This not only helps to establish credibility and trust in the information, but also helps to support the creators and owners of the information and ensures that individuals have access to the most complete and accurate information available.
[00:09:32] David: So if this continues and people don’t visit the sources like websites, isn’t it likely that people will stop developing websites?
[00:09:41] ChatGPT: It’s possible that if the use of AI language models continues to grow and people become increasingly reliant on these models for information without properly attributing and citing the original sources, there could be a reduction in traffic to websites and other online sources.
[00:09:56] This in turn could potentially lead to a reduction in [00:10:00] investment in developing and maintaining websites and other online resources, as well as a decrease in the creation of new content. However, it’s important to note that websites and other online sources serve many important functions beyond providing information for AI language models.
[00:10:16] For example, websites and online sources can serve as a platform for communication, commerce, and community building, and can provide valuable resources for individuals, businesses, and organizations. Moreover, websites and online sources are constantly evolving and adapting to new technologies and changing user needs, and it is likely that new and innovative uses for these platforms will continue to emerge as a result.
[00:10:42] It’s important for individuals and organizations to continue to support and invest in the development and maintenance of websites and other online resources, even as AI language models become increasingly prevalent. In conclusion, while the increasing use of AI language models may have an impact on the use [00:11:00] of websites and other online sources as sources of information, it is unlikely to lead to the complete disappearance of these platforms.
[00:11:08] Instead, it is important to use AI language models in a responsible and ethical manner, including properly attributing and citing the original sources of the information in order to support the continued development and growth of websites and other online resources.
[00:11:23] David: Next question. So if AI models like ChatGPT result in a reduction of websites and publicly available information, won’t that harm the capabilities of AI models like ChatGPT, as there will be increasingly fewer sources of public information?
[00:11:42] ChatGPT: Yes, that’s a possibility. The quality and accuracy of AI language models like ChatGPT is heavily dependent on the availability and quality of the training data used to develop them. If the use of AI language models leads to a reduction in investment in websites and other online [00:12:00] sources, it could result in a decrease in the quantity and quality of publicly available information.
[00:12:06] This, in turn, could limit the capabilities of AI language models, as the models would have less training data to work with and the information they generate would be less accurate, up to date, and diverse. Moreover, a reduction in publicly available information could result in a decrease in the number of experts and specialists who contribute to and maintain the information, leading to a decline in the quality of the information over time.
[00:12:30] Therefore, it is important to use AI language models in a responsible and ethical manner, including properly attributing and citing the original sources of the information in order to support the continued development and growth of websites and other online resources. By doing so, we can ensure that the information generated by AI language models remains accurate, diverse, and up to date, and that the models themselves continue to advance and improve.
[00:12:56] David: But, and this is a very big but, you are not like a search [00:13:00] engine, as you don’t link to or cite your sources.
[00:13:02] ChatGPT: That’s correct. I do not provide direct links or citations to my sources. I have been trained on a large corpus of texts from the internet, but I don’t have the ability to provide links to specific pages or documents.
[00:13:15] David: So given that GPT-3 and ChatGPT don’t cite their sources of knowledge, what percentage of people do you think will take the time to try and track down those sources?
[00:13:27] ChatGPT: It is difficult to estimate the exact percentage of people who will take the time to try to track down the sources of information provided by AI models like GPT-3 and ChatGPT, as this will depend on many factors, such as the individual’s level of interest in the topic, their level of skepticism about the information provided and the availability of alternative sources of information. However, based on my training data, it is likely that a smaller percentage of people will take the time to track down the sources, as many may simply take the information at face value or may not feel the [00:14:00] need to verify it if they find it to be trustworthy or relevant to their needs.
[00:14:03] David: There we have it. Certainly the practice doesn’t seem to be very ethical, and it looks like it’s gonna have an impact on people visiting websites. When you think about people engaged in content marketing and things like that, ChatGPT and similar things could have quite an impact, and quite a negative impact.
[00:14:22] And the question’s going to be: does that mean that people are less likely to invest the time in creating websites and doing content marketing, if all that’s gonna happen is engines like GPT-3 and ChatGPT are gonna take their information and make sure that there’s no backlinking or anything like that?
[00:14:42] Now that remains to be seen. As I say, I’m not a legal expert; whether this is illegal or not, I assume, is going to be tested in the courts. But in terms of its ethics, I’ll leave that to your decision. I’ll be very interested in knowing what you think. Please put a comment in down below. But I [00:15:00] suspect that it’s gonna have quite an impact.
[00:15:03] I also suspect that what’s gonna happen with these kinds of models that don’t attribute where they’re getting their information from is that somebody like the EU is going to legislate, or there’s going to be some form of class action or something like that, that will change things so that the models do have to at least say where that information has come from or provide some form of backlink.
[00:15:27] So, interesting times ahead; who knows what’s going to happen. But certainly something is going to change, and things are going to continue to change, probably fairly rapidly, at the moment. So we’ll see whether somebody like the EU does take control of things and starts to say you can’t just take information and use it as your own, even if it is in an engine like GPT.
[00:15:52] Please do leave me a comment. Let me know what you think, what your thoughts are, and what you think, if you’re a website owner, about something like [00:16:00] ChatGPT using your information, regurgitating it and not giving you any form of attribution for doing that. I’m David Wilkinson, I’m the editor of The Oxford Review, and I hope this has been useful.

David Wilkinson

David Wilkinson is the Editor-in-Chief of The Oxford Review. He is also acknowledged to be one of the world's leading experts in dealing with ambiguity and uncertainty and developing emotional resilience. David teaches and conducts research at a number of universities, including the University of Oxford (Medical Sciences Division), Cardiff University, Oxford Brookes University School of Business and many more. He has worked with many organisations as a consultant and executive coach, including Schroders, where he coaches and runs their leadership and management programmes, Royal Mail, Aimia, Hyundai, the RAF, the Pentagon, and the governments of the UK, US, Saudi Arabia, Oman and Yemen, for example. In 2010 he developed the world's first and only model and programme for developing emotional resilience across entire populations and organisations, which has since become known as the Fear to Flow model and is the subject of his next book. In 2012 he drove a 1973 VW across six countries in Southern Africa whilst collecting money for charity and conducting on-the-ground charity work, including developing emotional literacy in children and orphans in Africa, among a number of other activities. He is the author of The Ambiguity Advantage: What great leaders are great at, published by Palgrave Macmillan. See more: About David; David's Wikipedia page.

  • Jonathan Reams says:

    I had wondered if you were going to take it in the direction of something like ChatGPT eventually taking over content production and removing people from doing this totally, a kind of 1984 Big Brother scenario 😉

    • We aren’t removing people. Everything we do is done by humans, exactly for some of the reasons outlined in this and other videos. AI is not capable of making the decisions and judgements of a human when it comes to research. I do however have a keen interest in this and want to keep on top of the technology.

    • Hehe that could happen too… just not on this blog!
