Yea, academics need to just shut the publication system down. The more they keep pandering to it the more they look like fools.
As someone who’s not too familiar with the bureaucracy of academia I have to ask: Can’t the authors just upload all their studies to ResearchGate or some other website if they want? I know that they often share it privately with others when they request a paper, so can they post it publicly too?
Publishing comes with IP laws and copyright. For example, open access articles should be easy to upload without concern. “Private” articles being republished somewhere without license is “piracy”, and ResearchGate did get in trouble for it. It’s complicated. https://www.chemistryworld.com/news/publishers-settle-copyright-infringement-lawsuit-with-researchgate/4018095.article
Pre-prints are a different story.
The problems are wider than that. Besides, relying on “individuals just doing the right thing and going a little further to do so” is, IMO, a trap. Fix the system instead. The little thing everyone can do is think about the system and realise it needs fixing.
deleted by creator
Nope, you just can’t get a job unless you suck it up and publish in these journals, because they’re already famous. And established profs use their cosy relationships with editors to gatekeep and stifle competition for their funding :(
When will scientists just self-publish? I mean seriously, nowadays there is nothing between a researcher and publishing their stuff on the web. Only thing would be peer-reviewing, if you want that, but then just organize it without Elsevier. Reviewers get paid jack shit so you can just do a peer-reviewing fediverse instance where only the mods know the people so it’s still double-blind.
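A toy sketch of that “only the mods know the identities” idea, assuming a trusted broker; all class and function names here are made up for illustration:

```python
import secrets

class ReviewBroker:
    """Stand-in for the mods: the only party able to map pseudonyms to people."""

    def __init__(self):
        self._identity = {}  # pseudonym -> real name, never exposed to users

    def register(self, real_name):
        """Give a reviewer a random pseudonym; authors only ever see this."""
        pseudonym = "reviewer-" + secrets.token_hex(4)
        self._identity[pseudonym] = real_name
        return pseudonym

    def assign_reviewers(self, paper_authors, needed=2):
        """Pick pseudonymous reviewers, excluding the paper's own authors."""
        eligible = [p for p, name in self._identity.items()
                    if name not in paper_authors]
        return eligible[:needed]
```

Authors and reviewers only exchange pseudonyms, so the process stays double-blind exactly as long as the broker (the mods) is trusted.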
This system is just to dangle carrots in front of young researchers chasing their PhD
Because of “impact score”, the journal your work gets placed in has a huge impact on future funding. It’s a very frustrating process, and trying to go around it is suicide for your lab, so it has to be more of a top-down fix, because the bottom-up one is never going to happen.
That’s why everyone uses Sci-Hub. These publishers are terrible companies, up there with EA in unpopularity.
It sounds like all it would take to destroy the predatory for-profit publication oligarchs is a majority of the top few hundred scientists, across major disciplines, rejecting it and switching to a completely decentralized peer-to-peer open-source system in protest… The publication companies seem to gatekeep and provide no value. It’s like Reddit: the site is essentially worthless. All of the value is generated by the content creators.
Successfully initiating this from the fediverse would be such a massive boost in public visibility and discursive strength for the project of collectivizing information infrastructure (like Lemmy).
Imagine we fluffin freed science from capital and basically all the scientists openly stated how useful this was
I can only get so erect, please stop.
Thank you, this justifies introducing myself as a campaign porn producer from now on
(What I’m trying to say is you have my bow)
So, shall we do it?
Those few top people are assholes who love the enormous power they wield over PhD students, postdocs and junior faculty, and they are usually editors on those big name journals. Unlike the people who actually do the work, they are getting paid from this system.
Ya that would be awesome and I think that movement would gain momentum really fast since most high profile labs have all had to deal with this nonsense.
That or legislation/open access rules to make these papers more accessible. One can dream.
most high profile labs have all had to deal with this nonsense.
It’s even worse for low profile labs because those publication fees eat up a greater proportion of our budget.
I know about impact factor but still this system is shit and only works because people contribute to it.
Even Nature publishes shit articles now and then. Impact score is becoming a joke more and more.
When will scientists just self-publish?
It’s commonplace in my field (nuclear physics) to share the preprint version of your article, typically on arxiv.org. You can update the article as you respond to peer reviewers too. The only difference between this and the paywalled publisher version is that the latter has additional formatting edits by the journal.
If you search for articles on Google Scholar, it groups the preprint and published versions together, so it’s easy to find the non-paywalled copy. The standard journals I publish in even sort of encourage this; you can submit the LaTeX documents and figures by just giving the URL of an arXiv manuscript.
The US Department of Energy now requires any research they fund be made publicly available. So any article I publish is also automatically posted to osti.gov 1 year after its initial publication. This version is also grouped into the google scholar search results.
It’s an imperfect system, but it’s getting much better than it was even just a decade ago.
Yeah, I know about this, but in our field I personally don’t see anybody bothering with preprints, sadly. Maybe we should, though; it sounds like the first step.
As if peer review weren’t a massive fucking joke.
We should just self-publish and then openly argue about the findings like the OG scientists. It didn’t stop them from discovering anything.
Bone Wars: Electric Boogaloo. In the end you really do need a way to discern who is having an appreciable impact in a field in order to know whom to fund. I have yet to hear a meaningful metric for that, though.
Edit: I should clarify, the other option is strictly political through an academy of sciences and has historical awfulness associated with it as well.
Editors can act as filters, which is required when dealing with an excess of information streaming in. Just like you follow celebrities on social media or you follow pseudo-forums like this one, you get a service of information filtration which increases the concentration of useful knowledge.
In the early days of modern science, the rate of publication was small, making it easier to “digest” entire fields even with self-publishing. The number of published papers grows exponentially, as does the number of journals. https://www.researchgate.net/publication/333487946_Over-optimization_of_academic_publishing_metrics_Observing_Goodhart’s_Law_in_action/figures
Just like with these forums, the need for moderators (editors, reviewers) grows with the number of users who add content.
That’s where you print the downloaded PDF to a new PDF. New hash and same content, good luck tracing it back to me fucko.
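That works because a file hash covers every byte, not the visible content, so re-rendering changes the digest even when the text is identical. A quick illustration; the byte strings below are made-up stand-ins for two renderings of the same page:

```python
import hashlib

# Two "renderings" with the same visible text but different byte layouts.
original = b"%PDF-1.7\n/Producer (Publisher X)\nHello, reviewer 2."
reprinted = b"%PDF-1.4\n/Producer (CUPS-PDF)\nHello, reviewer 2."

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(reprinted).hexdigest()
print(h1 == h2)  # False: any byte-level difference yields an unrelated digest
```

Of course this only defeats hash-based matching; embedded watermarks are a separate problem, as mentioned below.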
You’d be safer IRL printing it on a printer without yellow ink, then scanning it, then deleting the metadata from the scan.
I saw some that add background watermarks too into random pages and locations.
Just print it to a PDF printer.
Purge metadata, convert PDF to rendered graphics (including bitmaps), add OCR layer.
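One way to chain those steps, sketched in Python. It assumes pdftoppm (from poppler-utils), img2pdf, and ocrmypdf are installed; with dry_run=True nothing executes and the planned commands are just returned for inspection:

```python
import glob
import subprocess
from pathlib import Path

def scrub(src_pdf, out_pdf, workdir="scrub_tmp", dry_run=False):
    """Re-render a PDF to bitmaps so only pixels survive, then add a fresh OCR layer."""
    work = Path(workdir)
    work.mkdir(exist_ok=True)
    flat = work / "flat.pdf"

    # Step 1: render every page to a 300 dpi bitmap (page-1.png, page-2.png, ...)
    render = ["pdftoppm", "-png", "-r", "300", str(src_pdf), str(work / "page")]
    if not dry_run:
        subprocess.run(render, check=True)

    # Step 2: rebuild a PDF from the bitmaps; none of the original objects,
    # metadata, or hidden text layers survives this round trip.
    pages = sorted(glob.glob(str(work / "page*.png"))) or [str(work / "page-1.png")]
    rebuild = ["img2pdf", "-o", str(flat)] + pages

    # Step 3: add a fresh, searchable OCR text layer on top of the pixels.
    ocr = ["ocrmypdf", str(flat), str(out_pdf)]

    if not dry_run:
        subprocess.run(rebuild, check=True)
        subprocess.run(ocr, check=True)
    return [render, rebuild, ocr]
```

The output is bigger and uglier than the original, but nothing except the rendered pixels and the new OCR text makes it through.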
There are tools for this already… but it sure would be nice to have a Firefox plugin that scrubs all metadata on downloads by default.
(Note I’m hoping this exists and someone will Um, Actually me)
deleted by creator
You could write a script to automatically watch a folder for new files and strip metadata from every one of them, I guess. I did something like that for images a while back.
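A minimal polling version of that idea in Python, stdlib only; the exiftool call is an assumption (swap in whatever stripping tool you prefer):

```python
import os
import subprocess
import time

def strip_metadata(filepath):
    # Assumes exiftool is installed; "-all=" clears every writable tag.
    subprocess.run(["exiftool", "-all=", "-overwrite_original", filepath],
                   check=True)

def poll_once(folder, seen, handler):
    """One polling step: run handler on files that appeared since `seen`."""
    current = set(os.listdir(folder))
    for name in sorted(current - seen):
        handler(os.path.join(folder, name))
    return current

def watch_folder(folder, handler=strip_metadata, interval=2.0):
    """Strip metadata from every new file dropped into `folder`, forever."""
    seen = set(os.listdir(folder))
    while True:
        time.sleep(interval)
        seen = poll_once(folder, seen, handler)
```

Polling is crude but portable; an event-driven version would use inotify (or the watchdog library) instead of a sleep loop.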
I think this is less of a meme and more of a scientifically dystopian fun fact, but sure.
If the paper is worth it and has an original, non-OCRed text layer, it’d be better exported to almost any other format. We don’t call good things a PDF file, lol. It’s clumsy, heavy, has an unadjustable font size and useless empty borders, includes various limits and takes on DRM, and editing it usually requires paid software. This format should die off.
The only thing academia needs it for is strict references to an exact page, but that’s not hard to emulate. The upsides are overwhelming.
I’ve had my couple of rounds properly digitizing PDFs into e-books and text-processing formats, and it’s a pain in the ass, but if I know it’ll be read by someone other than me, I’m okay with putting a bit more effort into it.
Thanks. I’ve used simpler tools (besides pirated Acrobat) and wrote some scripts to streamline de-DRMing and breaking passwords on them. The one you posted looks promising; I’ll save it to toy with in my free time.
It’s the bees knees. Bonus theme for it: https://draculatheme.com/stirling-pdf
Well, I guess PDF has one thing going for it (which might not be relevant for scientific papers): The same file will render the same on any platform (assuming the reader implements all the PDF spec to the tee).
What format do you suggest?
FB2 is a well-known format among Russian pirates, but it can and should be improved because it sucks ass in many ways. FB3 was announced long ago, but it hasn’t got any traction yet.
EPUB is more popular, so it’s probably the go-to format for most books the US and EU create, but it isn’t much better.
Other than that, even DOC/DOCX is better than PDF, but I’d recommend RTF since it has fewer traces of M$ bullshit; while it’s an imperfect format, it’s still better.
Maybe for books. I’ve seen only pdf and PostScript widely used for papers in academia.
Edit: ok, my supervisor liked DVI, but he was the only one I knew with that kind of taste
DVI? Can you unpack your thoughts on that, as I haven’t come across it yet?
I only know DjVu, or the “déjà vu” format, that’s usually used for raw scans.
Djvu is also for books and similar.
I don’t know much about the DVI format, but I remember the TeX toolchain producing it as a side effect
DOCX, DOC, RTF and all those have a different purpose than PDF. Word docs don’t even necessarily look the same on two different computers with the same version of Word, and RTF doesn’t even attempt any kind of page description; it’s literally only a rich format for text. None of these is a true “if I give this to someone to print, I know what I will get” portable document format.
I will look at fb*, I had not heard of them. Thanks!
Most papers are written in TeX or LaTeX. These formats separate display from content in such a way that a document can be quickly reformatted to a variety of page sizes, margins, text sizes and so on with minimal effort. It’s basically an open-standard typesetting format. You can create and edit TeX in any text editor and run it through a program to prepare it for print or viewing. Nothing else can handle math formulas, tables, charts, etc. with the same elegance. If you’ve ever struggled to write a math paper in Microsoft Word, seriously question why your professor hasn’t already forced you to learn LaTeX.
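For anyone who has only ever seen the PDF output: LaTeX source is just plain text you can keep in version control, and math that fights you in Word is a couple of lines. A minimal example document (the physics content is only illustrative):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

\section{Results}
The transition rate follows from Fermi's golden rule,
\begin{equation}
  \Gamma_{i \to f} = \frac{2\pi}{\hbar}
    \left| \langle f | H' | i \rangle \right|^2 \rho(E_f),
\end{equation}
which any \LaTeX{} engine typesets identically from this plain-text source.

\end{document}
```

Running this through `pdflatex` gives the print-ready page; changing the document class or margins reflows everything without touching the content.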
Can’t all of us researchers who are technically good at web servers start an open-source alternative to these paid services? I get that we need to publish with a renowned publisher, but we could also decide together to publish to an alternative open-source option as well. That way the open-source alternative also grows.
Some time last year I learned of an example of such a project (peerreview on GitHub):
The goal of this project was to create an open access “Peer Review” platform:
Peer Review is an open access, reputation based scientific publishing system that has the potential to replace the journal system with a single, community run website. It is free to publish, free to access, and the plan is to support it with donations and (eventually, hopefully) institutional support.
It allows academic authors to submit a draft of a paper for review by peers in their field, and then to publish it for public consumption once they are ready. It allows their peers to exercise post-publish quality control of papers by voting them up or down and posting public responses.
I just looked it up now to see how it’s going… and I’m a bit saddened to find out that the developer decided to stop. He has a blog where he wrote about the project and about why he is no longer so optimistic about the prospects of crowd-sourced peer review: https://www.theroadgoeson.com/crowdsourcing-peer-review-probably-wont-work , and related posts referenced therein.
It is only one opinion, but at least it is the opinion of someone who has thought about this for some time and made a real effort towards the goal, so maybe you will find some value in his perspective.
Personally, I am still optimistic about this being possible. But that’s easy for me to say as I have not invested the effort!
I do like the intermediaries that have popped up, like PubPeer. I highly recommend that everyone get the extension as it adds context to many different articles.
That’s really cool, I will use it
It’s been surprisingly helpful, it even flags linked pages, like on Wikipedia.
This kind of thing needs to be started by universities and/or research institutes. Not the code part, but the organising the first journals part. It’s going to get nowhere without establishment buy-in.
The challenge is how to jump-start a platform that researchers will actually come to.
If we build a decentralized system for paper publishing, like lemmy based on activitypub… will it work?
Is there hassle-free software that simulates low-quality printing and rescanning, with text recognition?
Print to PDF might just convert the PDF into PostScript instructions and back again without the original PDF’s metadata, but that probably depends on the Print to PDF software being used and its settings.
deleted by creator