
At the foundation of academic writing, there is a given: that you and the authors you are citing have made an effort to present factual arguments with citations when applicable.

ChatGPT doesn’t do that. And so I think it should almost never be cited.

I think of ChatGPT as more of a writing coach. You tell it what you want to write about, and it puts together a sample response that, above all else, sounds convincing. If it happens to know relevant facts, it will put them in (but not with citations). If ChatGPT doesn’t know relevant facts, it uses made-up facts as a placeholder, just to give an example of how a convincing response could be structured.

It’s then our job as actual writers to take this sample, hunt down the relevant facts (with citations), and put forward our final argument.

Perhaps the only time it makes sense to me to cite ChatGPT is if you are explicitly writing about it as an entity.



> ChatGPT doesn’t do that. And so I think it should almost never be cited.

I usually try to get ChatGPT to give me a source for its answers. In most cases, at least for me, it gives non-working URLs. It's very frustrating. Or rather, it was very frustrating; I've since stopped using it anywhere I need to quote sources.

ChatGPT is not a useful source: its inability to tell you where it got its information makes citing it no different from citing "Bob, my uncle."


If you have ChatGPT Plus, you get access to the Web Browsing model, which gives ChatGPT access to the Internet. Not only do you get an audit log of every site ChatGPT visits, you also get citations for any sources it ends up using in the generated text. Even when ChatGPT fails, the audit log gives me links to several sources I can consult instead.


How is that different from a normal Google search?


The difference is that you aren’t the one coming up with the search terms, browsing through all the search results, and then selecting the specific results you want to explore further. You let the Web Browsing model do all that work for you.


In the case of ChatGPT, the citations serve a different purpose. ChatGPT should be cited as a warning that the text needs additional investigation.


Or as a helpful filter to reduce your reading load.


If you're not double-checking what ChatGPT says about its sources, you might look like an utter fool. https://www.businessinsider.com/lawyer-duped-chatgpt-invente...


Well, in his defense, he asked ChatGPT, "Am I an utter fool for using you?" and it said "no"...


Yeah... for some reason the judge wasn't fooled.


I’ve used ChatGPT to generate value scenarios, possible use cases, and descriptions of use for internal white papers. If another colleague were helping with this, I’d credit them in the acknowledgements at minimum. If I had gotten a list from a published paper, I’d credit it — even if I changed it considerably.

I do this for three reasons: (1) giving credit; (2) letting readers find more details so they can better interrogate the paper; and (3) providing transparency about sources generally.

With that in mind, it seems I should credit ChatGPT, especially to meet the second and third reasons. And that credit should include my prompt, or, in an extended session, the sequence of prompts.

The hard part is the first reason, credit. The way it works today, I can’t precisely credit the pieces that helped build the response I’m using. I’d love to see a better way of doing that.


For research uses, we built an Assistant on top of ChatGPT that uses our database of articles to ground its answers and give real references. It might be useful for you or others, at least for research papers: https://scite.ai/assistant
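The grounding pattern described above can be sketched in a few lines: retrieve candidate articles from a local database, then have the answer cite only what was actually retrieved. Everything here is a hypothetical stand-in (the article records, the word-overlap relevance score); a production system like the one linked would use embeddings and an LLM to draft the answer text.

```python
# Toy article database; in practice this would be millions of indexed papers.
ARTICLES = [
    {"id": "doi:10.1000/a1", "title": "Hallucination rates in large language models",
     "text": "We measure how often language models fabricate citations."},
    {"id": "doi:10.1000/b2", "title": "Retrieval-augmented generation",
     "text": "Grounding model output in retrieved documents reduces fabrication."},
    {"id": "doi:10.1000/c3", "title": "Citation practices in academic writing",
     "text": "Authors must credit sources to support factual arguments."},
]

def retrieve(query: str, k: int = 2) -> list:
    """Rank articles by word overlap with the query (toy relevance score)."""
    q_words = set(query.lower().split())
    scored = sorted(
        ARTICLES,
        key=lambda a: len(q_words & set((a["title"] + " " + a["text"]).lower().split())),
        reverse=True,
    )
    return scored[:k]

def grounded_answer(query: str) -> str:
    """Compose an answer that cites only articles actually in the database."""
    refs = retrieve(query)
    citations = "; ".join(f'{a["title"]} ({a["id"]})' for a in refs)
    return f"Answer drafted from retrieved sources. References: {citations}"

print(grounded_answer("do language models fabricate citations?"))
```

The key property is that every reference in the output is copied verbatim from the database, so a fabricated citation is impossible by construction; the model can still misread a source, which is why the thread's advice to check everything still applies.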


I have a very different experience with GPT-4. It always provides about 6-10 references for every response, and they work almost every single time. I'm aware of the hallucination problem and check everything, but compared to 3.5 (which often got titles a bit wrong or mixed up authors), it's astoundingly good at providing reasonable and factual information with working references. Definitely always check, though. It's great for getting past a blank page when writing about something where you probably already know which references will be drawn on anyway, so you can edit out any subtle overreaches it makes.


In that case, there's no reason to cite ChatGPT at all, just as you wouldn't cite Google. Cite the actual references, which you manually checked.


I think if you significantly benefitted from ChatGPT in preparing the paper, then you should still cite it, just like you would cite a software package (e.g. scipy), even if in principle you could have done it by hand.


I didn't know that was standard, but where is the line? You would never cite e.g. Microsoft Word, right? Coincidentally, that will soon have ChatGPT built in.


APA for example says the following: "... a reference is not necessary for standard software... Examples are Microsoft Word, Java, and Adobe Photoshop."

https://blog.apastyle.org/apastyle/2015/01/how-to-cite-softw...


> Perhaps the only time it makes sense to me to cite ChatGPT is if you are explicitly writing about it as an entity.

That specific situation is what TFA is discussing. It says to discuss ChatGPT inside the paper itself, rather than confining that to a citation.


Text generated by ChatGPT that you have reproduced should be cited, if for no other reason than that you are not the author. Taking credit for work that is not your own is unethical.


> Taking credit for work that is not your own is unethical.

Almost hilarious that ChatGPT doesn't give citations or credit for anything it writes, despite its entire knowledge being derived from us.


Ghost writers are not always credited in books. Is it because they receive money in exchange?


Generally speaking, plagiarism (as in, passing off others' words and ideas as your own) is not illegal except in cases where it intersects with copyright law. In the case of ghost writers, they transfer copyright to the publisher, so legally speaking there's no issue.

In academic writing, there's a standard, upheld by academic institutions, academic communities, and the academic publishing industry (but not the legal system itself), that the listed authors are the sole authors of the text. The same standard doesn't exist in non-academic publishing (or a lot of other media), so not crediting ghost writers is considered acceptable there.


General books are not the same as academic papers.

(I am aware there are academic books)


I would assume that most books written by CEOs, professional athletes, general celebrities, and the like are significantly cowritten with someone whether they were entirely ghostwritten or not.



