Here's a taste:
Emily Bell, director of the Tow Center for Digital Journalism at Columbia University
The core of what plagiarism is remains undented by the digital publishing environment. Copying out the words of others and passing them off as your own is still what it always was; wholesale plagiarism is a sacking offense in most newsrooms. It is of course much easier to detect now, thanks to Google text search, but beyond the clear example of screeds of lifted text or images passed off as your own, the issue of who is a plagiarist is also a little more porous at the edges than it was.
In digital journalism, one of the most valuable functions you can perform is to aggregate and link to the content produced by others. We do however also see the problems of “over aggregation,” where credit and sourcing are not clear enough, links are missing, attribution is fuzzy and where the idea of “fair use” is enormously stretched. Is this plagiarism or enthusiastic aggregation?
The increased ease of detection of plagiarism is offset against the temptation to “over aggregate.” As for the broader context of taking ideas and presenting them as new, well, that happens all the time, sometimes knowingly and sometimes accidentally. It is an area where journalism is still thrashing out standards and best practice; there is a sort of arms race of transparency going on in digital news filtering at the moment – who did what first and when. I can’t help feeling that the idea of a plagiarism algorithm is not too far away.
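The “plagiarism algorithm” Bell anticipates already exists in rudimentary form: the same text-search idea that makes lifted passages easy to Google can be automated by comparing word n-grams between two documents. The sketch below is purely illustrative, not any real detection tool; the function names and thresholds are assumptions.

```python
# Illustrative sketch of n-gram-based plagiarism detection:
# two texts that share long runs of identical wording will
# share many word trigrams, while independently written text
# on the same topic will share few.

def ngrams(text, n=3):
    """Return the set of word n-grams in a text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(a, b, n=3):
    """Jaccard similarity of the two texts' word n-gram sets (0.0 to 1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

original = "copying out the words of others and passing them off as your own"
lifted = "copying out the words of others and passing them off as yours"
fresh = "aggregation with clear links and attribution is a valuable function"

# Lightly reworded copy scores high; unrelated text scores zero.
print(overlap_score(original, lifted))  # high overlap
print(overlap_score(original, fresh))   # no overlap
```

Real systems add normalization, stemming, and indexing at web scale, but the underlying signal is the same: shared strings of consecutive words.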