Thoughts on “Some academics remain skeptical of Academia.edu” | University Affairs
This morning I ran across a tweet from colleague Andrew Eckford:
I find academic social networking sites obnoxious, but I don’t get the fear and loathing. via @EmmMacfarlane https://t.co/jwYJxJsZJt
— Andrew Eckford (@andreweckford) April 12, 2016
His response was probably innocuous enough, but I thought the article should be taken to task a bit more.
“35 million academics, independent scholars and graduate students as users, who collectively have uploaded some eight million texts”
35 million users is an okay number, but their engagement must be spectacularly bad if only 8 million texts are available. How many researchers do you know who’ve published only a quarter of an article anywhere, much less gotten tenure?
“the platform essentially bans access for academics who, for whatever reason, don’t have an Academia.edu account. It also shuts out non-academics.”
They must have changed this, as pretty much anyone with an email address (including non-academics) can create a free account and use the system. I’m fairly certain that the platform was always open to the public from the start, but the article doesn’t seem to question the statement at all. If we want to argue about shutting out non-academics or even academics in poorer countries, let’s instead take a look at “big publishing” and their $30+/paper paywalls and publishing models, shall we?
“I don’t trust academia.edu”
Given his following discussion, I can only imagine what he thinks of big publishers in academia and that debate.
“McGill’s Dr. Sterne calls it ‘the gamification of research.’”
Most research is too expensive to really gamify in such a simple manner. Many researchers are publishing to either get or keep their jobs and don’t have much time, information, or knowledge to try to game their reach in these ways. If anything, the institutionalization of “publish or perish” has already accomplished far more “gamification”; Academia.edu is just helping to increase the reach of the publication. Given that research shows that most published research isn’t even read, much less cited, how bad can Academia.edu really be? [Cross reference: Reframing What Academic Freedom Means in the Digital Age]
If we look at Twitter and the blogging world as an analogy for Academia.edu and researchers: Twitter had a huge ramp-up starting in 2008 and helped bloggers obtain eyeballs/readers, but where is it now? Twitter, even with a reasonable business plan, is stagnant, with growing grumblings that it may be failing. I suspect that without significant changes, Academia.edu (which has a much smaller niche audience than Twitter) will also eventually fall by the wayside.
The article rails against not knowing what the business model is or what’s happening with the data. I suspect that the platform itself doesn’t have a very solid business plan and they don’t know what to do with the data themselves except tout the numbers. I’d suspect they’re trying to build “critical mass” so that they can cash out by selling to one of the big publishers like Elsevier, who might actually be able to use such data. But this presupposes that they’re generating enough data; my guess is that they’re not. And on that subject, from a journalistic viewpoint, where’s the comparison to the rest of the competition including ResearchGate.net or Mendeley.com, which in fact was purchased by Elsevier? As it stands, this simply looks like a “hit piece” on Academia.edu, and sadly not a very well researched or reasoned one.
In sum, the article sounds to me like a bunch of Luddites running around yelling “fire”, particularly when I’d imagine that most of those referred to in the piece feed into the more corporate side of publishing in major journals rather than publishing their work themselves on their own websites. I’d further suspect they’re probably not even practicing academic samizdat. It feels to me like the author and some of those quoted aren’t actively participating in the social media space enough to be able to comment on it intelligently. If the paper wants to pick at the academy in this manner, why don’t they write an exposé on the fact that most academics still have websites that look like they’re from 1995 (if, in fact, they have anything beyond their university’s mandated business-card placeholder) when there is a wealth of free and simple tools they could use? Let’s at least build a cart before we start whipping the horse.
For academics who really want to spend some time and thought on a potential solution to all of this, I’ll suggest that they start out by owning their own domain and own their own data and work. The #IndieWeb movement certainly has an interesting philosophy that’s a great start in fixing the problem; it can be found at http://www.indiewebcamp.com.
Some Thoughts on Academic Publishing and “Who’s downloading pirated papers? Everyone” from Science | AAAS
Sci Hub has been in the news quite a bit over the past half a year and the bookmarked article here gives some interesting statistics. I’ll preface some of the following editorial critique with the fact that I love John Bohannon’s work; I’m glad he’s spent the time to do the research he has. Most of the rest of the critique is aimed at the publishing industry itself.
From a journalistic standpoint, I find it disingenuous that the article didn’t actually hyperlink to Sci Hub. Neither did it link out to (or provide a full quote of) Alicia Wise’s Twitter posts, nor link to her rebuttal list of 20 ways to access their content freely or inexpensively. Of course both of these are editorial decisions, and perhaps the rebuttal was so flimsy as to be unworthy of a link from such an esteemed publication anyway.
Sadly, Elsevier’s list of 20 ways of free/inexpensive access doesn’t really provide any simple coverage for graduate students or researchers in poorer countries, who are the likeliest groups of people using Sci Hub, unless they fraudulently claim membership in a class to which they don’t belong, and is that morally any better than the original theft? It’s almost assuredly never used by patients, who seem to be covered under one of the options, as the option to do so is painfully undiscoverable behind their typical $30/paper paywalls. Their patchwork hodgepodge of free access is difficult enough to discern on its own, and one must keep in mind that this is just one of dozens of publishers a researcher must navigate to find the one thing they’re looking for right now (not to mention the thousands of times they need to do this throughout a year, much less a career).
Consider this experiment, which could be a good follow up to the article: is it easier to find and download a paper by title/author/DOI via Sci Hub (a minute) versus through any of the other publishers’ platforms with a university subscription (several minutes) or without a subscription (an hour or more to days)? Just consider the time it would take to dig up every one of 30 references in an average journal article: maybe just a half an hour via Sci Hub versus the days and/or weeks it would take to jump through the multiple hoops to first discover, read about, and then gain access and then download them from the over 14 providers (and this presumes the others provide some type of “access” like Elsevier).
Those who lived through the Napster revolution in music will realize that the dead simplicity of that system is primarily what helped kill the music business compared to the ecosystem that exists now, with easy access through the multiple streaming sites (Spotify, Pandora, etc.) or inexpensive paid options like iTunes. If the publishing business doesn’t want to get completely killed, they’re going to need to create the iTunes of academia. I suspect they’ll have internal bean-counters watching the percentage of the total (now apparently 5%) and will probably only do something once it passes a much larger threshold, though I imagine that they’re really hoping the number stays stable, which would signal that they needn’t be concerned. They’re far more likely to continue to maintain their status quo practices.
Some of this ease-of-access argument is truly borne out by the statistics of open access papers which are downloaded by Sci Hub–it’s simply easier to both find and download them that way compared to traditional methods; there’s one simple pathway for both discovery and download. Surely the publishers, without colluding, could come up with a standardized method or protocol for finding and accessing their material cheaply and easily?
“Hart-Davidson obtained more than 100 years of biology papers the hard way—legally with the help of the publishers. ‘It took an entire year just to get permission,’ says Thomas Padilla, the MSU librarian who did the negotiating.” John Bohannon in Who’s downloading pirated papers? Everyone
Personally, I use relatively advanced tools like LibX, which happens to be offered by my institution and which I feel isn’t very well known, and it still takes me longer to find and download a paper than it would via Sci Hub. God forbid some enterprising hacker were to create a LibX community version for Sci Hub. Come to think of it, why haven’t any of the dozens of publishers built and supported simple tools like LibX which make their content easy to access? If we extend the World War I analogy of the machine gun to academic papers, why should modern researchers still be using single-load rifles against an enemy that has access to nuclear weaponry?
My last thought here comes on the heels of the two tweets from Alicia Wise mentioned, but not shown in the article:
I’m all for universal access, but not theft! There are lots of legal ways to get access https://t.co/iDZW2XcPhy ½ .@mbeisen .@Sci_Hub
— Alicia Wise (@wisealic) March 14, 2016
A digital sub to the NYT $260/person and for all #Elsevier content $215/researcher. Both fantastic value! 2/2 .@mbeisen .@Scihub @nytimes
— Alicia Wise (@wisealic) March 14, 2016
She mentions that the New York Times charges more than Elsevier does for a full subscription. This is tremendously disingenuous, as Elsevier is but one of dozens of publishers to which one would have to subscribe to have access to the full panoply of material researchers are typically looking for. Further, neither Elsevier nor their competitors make their material as easy to find and access as the New York Times does. Neither do they discount access in an attempt to find the subscription price their users find financially acceptable. Case in point: while I often read the New York Times, I rarely go over their monthly limit of articles to need any type of paid subscription. Solely because they made me an interesting offer to subscribe for 8 weeks for 99 cents, I took them up on it and renewed that deal for another subsequent 8 weeks. Not finding it worth the full $35/month price point, I attempted to cancel. I had to cancel the subscription via phone, but why? The NYT customer rep made me no fewer than 5 different offers at ever-decreasing price points–including the 99 cents for 8 weeks which I had been getting!–to try to keep my subscription. Neither Elsevier nor any of their competitors has ever tried (much less so hard) to earn my business. (I’ll further posit that it’s because it’s easier to fleece at the institutional level with bulk negotiation, a model not too dissimilar to the textbook business pressuring professors on textbook adoption rather than trying to sell directly to the end consumer–the student–which I’ve written about before.)
(Trigger alert: apophasis to come.) And none of this is to mention the quality control that is (or isn’t) put into the journals or papers themselves. Fortunately one needn’t even go further than Bohannon’s other writings, like Who’s Afraid of Peer Review? Then there are the hordes of articles on poor research design, misuse of statistical analysis, and inability to repeat experiments. Not to give them any ideas, but lately it seems like Elsevier buying the Enquirer and charging $30 per article might not be a bad business decision. Maybe they just don’t want to play second banana to TMZ?
Interestingly, there’s a survey at the end of the article which indicates some additional sources of academic copyright infringement. I do have to wonder how the data for the survey will be used. There’s always the possibility that logged-in users indicating they’re circumventing copyright are opening themselves up to litigation.
I also found the concept of using the massive data store as a means of applied corpus linguistics for science an entertaining proposition. This type of research could mean great things for science communication in general. I have heard of people attempting to do such meta-analysis to guide the purchase of potential intellectual property for patent trolling as well.
Finally, for those who haven’t done it (ever or recently), I’ll recommend that it’s certainly well worth their time and energy to attend one or more of the many 30-60 minute sessions most academic libraries offer at the beginning of their academic terms to train library users on research tools and methods. You’ll save yourself a huge amount of time.
The Mathematics Literature Project
The Mathematics Literature Project intends to survey the state of the freely accessible mathematics literature. In particular, it will index freely accessible URLs for mathematics articles. These are legitimately hosted copies of the article (i.e. at publishers, the arXiv, institutional repositories, or authors’ homepages), which are freely available in any…
The bookmarking service CiteULike is shutting down on March 30, 2019 after a 15-year run. While some may turn to yet another silo or walled garden, I highly recommend going IndieWeb and owning all of your own bookmarks on your own website.
I’ve been doing this for several years now, and it gives me a lot more control over how much metadata I can add, change, or modify as I see fit. Let me know if I can help you do something similar.
🎙 The IndieWeb and Academic Research and Publishing
Running time: 0h 12m 59s | Download (13.9 MB) | Subscribe by RSS | Huffduff
Overview of the workflow
Posting
The researcher posts research work to their own website (as bookmarks, reads, likes, favorites, annotations, etc.); they can post their data for others to review, and they can post their ultimate publication to their own website.
Discovery/Subscription methods
The researcher’s post can send a webmention to an aggregating website, similar to the way they would pre-print their research on a server like arXiv.org. The aggregating website can then parse the original and display the title, author(s), publication date, revision date(s), abstract, and even the full paper itself. This aggregator can act as a subscription hub (with WebSub technology) which other researchers can use to find, discover, and read the original research.
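The notification step here is plain Webmention: the sender fetches the target page, finds its advertised endpoint, and POSTs two URLs to it. A minimal stdlib-only sketch of the sending side (the example URLs in the usage below are hypothetical; a real sender would also check the HTTP `Link` header per the spec):

```python
from html.parser import HTMLParser
from urllib.parse import urlencode, urljoin


class WebmentionEndpointFinder(HTMLParser):
    """Finds the first <link> or <a> advertising rel="webmention"."""

    def __init__(self):
        super().__init__()
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        if self.endpoint is not None or tag not in ("link", "a"):
            return
        attrs = dict(attrs)
        rels = (attrs.get("rel") or "").split()
        if "webmention" in rels and "href" in attrs:
            self.endpoint = attrs["href"]


def discover_endpoint(page_html, page_url):
    """Return the absolute webmention endpoint URL for a page, or None."""
    finder = WebmentionEndpointFinder()
    finder.feed(page_html)
    if finder.endpoint is None:
        return None
    # Endpoints may be relative; resolve against the page's own URL.
    return urljoin(page_url, finder.endpoint)


def webmention_payload(source, target):
    """Form-encoded body to POST to the discovered endpoint."""
    return urlencode({"source": source, "target": target})
```

Usage would look like `discover_endpoint(html, "https://aggregator.example/paper/123")` followed by an HTTP POST of `webmention_payload(...)` with a `Content-Type: application/x-www-form-urlencoded` header.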
Peer-review
Readers of the original research can then write about, highlight, annotate, and even reply to it on their own websites to effectuate peer review, which then gets sent to the original by way of Webmention technology as well. The work of the peer reviewers stands in public as work which could be used in evaluations for promotion and tenure.
Feedback mechanisms
Readers of original research can post metadata relating to it on their own websites, including bookmarks, reads, likes, replies, annotations, etc., and send webmentions not only to the original but to the aggregation sites, which could collect these responses and assign them point values based on interaction/engagement levels (e.g. bookmarking something as “want to read” is 1 point, whereas indicating one has read something is 2 points, replying to something is 4 points, and other publications officially citing it provide 5 points). Such a scoring system could be used to provide a better citation measure of the overall value of a research article in a networked world. In general, Webmention could be used to provide a two-way auditable trail for citations, and the citation trail can be used in combination with something like the Vouch protocol to prevent gaming the system with spam.
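The point values above are purely illustrative, but they are trivial to prototype. A sketch of such a scorer, assuming an aggregator has already reduced incoming webmentions to reaction-kind strings (both the weights and the kind names are the hypothetical ones from this paragraph, not any standard):

```python
# Hypothetical engagement weights from the discussion above.
WEIGHTS = {
    "bookmark": 1,  # marked "want to read"
    "read": 2,      # indicated they read it
    "reply": 4,     # wrote a reply
    "citation": 5,  # a publication officially citing the work
}


def engagement_score(reactions):
    """Sum weighted reactions; unrecognized kinds count for nothing.

    `reactions` is an iterable of kind strings, e.g. as parsed from
    incoming webmentions by an aggregation site.
    """
    return sum(WEIGHTS.get(kind, 0) for kind in reactions)
```

For example, one bookmark, one read, one reply, and one formal citation would total 12 points under this scheme.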
Archiving
Government institutions (like the Library of Congress), universities, academic institutions, libraries, and non-profits (like the Internet Archive) can also create and maintain archival copies of digital and/or printed research for future generations. This would guard against researchers’ sites disappearing from the internet after their deaths and provide better longevity.
Show notes
Resources mentioned in the microcast
IndieWeb for Education
IndieWeb for Journalism
Academic samizdat
arXiv.org (an example pre-print server)
Webmention
A Domain of One’s Own
Article on A List Apart: Webmentions: Enabling Better Communication on the Internet
Syndicating to Discovery sites
Examples of similar currently operating sites:
IndieNews (sorts posts by language)
IndieWeb.xyz (sorts posts by category or tag)
Some thoughts on highlights and marginalia with examples
Earlier today I created a read post with some highlights and marginalia related to a post by Ian O’Byrne. In addition to posting it and the data for my own purposes, I also did it as a manual test of sorts, particularly since it seemed apropos in reply to Ian’s particular post. I thought I’d take a stab at continuing to refine my work at owning and controlling my own highlights, notes, and annotations on the web. I suspect that being able to better support this will also help to bring more self-publishing and its benefits to the halls of academe.
At present I’m relying on a PESOS solution to post on another site and syndicate a copy back to my own site. I’ve used Hypothesis, in large part for their fantastic UI and as well for the data transfer portion (via RSS and even API options), to own the highlights and marginalia I’ve made on the original on Ian’s site. Since he’s syndicated a copy of his original to Medium.com, I suppose I could syndicate copies of my data there as well, but I’m saving myself the additional manual pain for the moment.
Rather than send a dozen-plus webmentions to Ian, I’ve bundled everything up in one post. He’ll receive it, and it will default to display as a read post, though I suspect he may switch it to a reply post for display on his own site. For his own use case, as inferred from his discussion about self-publishing and peer review within the academy, it might be more useful for him to have received the dozen webmentions. I’m half tempted to have done all the annotations as standalone posts (much the way they were done within Hypothesis as I read) and use some sort of custom microformats markup for the highlights and annotations (something along the lines of u-highlight-of and u-annotation-of). At present, however, I’ve got some UI concerns about doing so.
One problem is that, on my site, I’d be adding 14 different individual posts, which are all related to one particular piece of external content. Some would be standard replies while others would be highlights and the remainder annotations. Unless there’s some particular reason to do so, compiling them into one post on my site seems to be the most logical thing to do from my perspective and that of my potential readers. I’ll note that I would distinguish annotations as being similar to comments/replies, but semantically they’re meant more for my sake than for the receiving site’s sake. It might be beneficial for the receiving site to accept and display them (preferably in-line) though I could see sites defaulting to considering them vanilla mentions as a fallback. Perhaps there’s a better way of marking everything up so that my site can bundle the related details into a single post, but still allow the receiving site to log the 14 different reactions and display them appropriately? One needs to not only think about how one’s own site looks, but potentially how others might like to receive the data to display it appropriately on their sites if they’d like as well. As an example, I hope Ian edits out my annotations of his typos if he chooses to display my read post as a comment.
One might take some clues from Hypothesis which has multiple views for their highlights and marginalia. They have a standalone view for each individual highlight/annotation with its own tag structure. They’ve also got views that target highlights/annotation in situ. While looking at an original document, one can easily scroll up and down through the entire page’s highlights and annotations. One piece of functionality I do wish they would make easier is to filter out a view of just my annotations on the particular page (and give it a URL), or provide an easier way to conglomerate just my annotations. To accomplish a bit of this I’ll typically create a custom tag for a particular page so that I can use Hypothesis’ search functionality to display them all on one page with a single URL. Sadly this isn’t perfect because it could be gamed from the outside–something which might be done in a classroom setting using open annotations rather than having a particular group for annotating. I’ll also note in passing that Hypothesis provides RSS and Atom feeds in a variety of ways so that one could quickly utilize services like IFTTT.com or Zapier to save all of their personal highlights and annotations to their website. I suspect I’ll get around to documenting this in the near future for those interested in the specifics.
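As a proof of concept for that feed-based export, here is a hedged sketch that parses an Atom feed of annotations into (title, link) pairs; Hypothesis exposes per-user Atom feeds (the exact feed URL format is left to their documentation), and the same parsing could drive an IFTTT/Zapier-style import into one's own site:

```python
import xml.etree.ElementTree as ET

# Atom namespace prefix for ElementTree queries.
ATOM = "{http://www.w3.org/2005/Atom}"


def annotations_from_atom(feed_xml):
    """Extract (title, link) pairs from an Atom feed of annotations.

    Works on any Atom feed; pointed at a Hypothesis per-user feed,
    each pair would be one highlight/annotation and its permalink.
    """
    root = ET.fromstring(feed_xml)
    items = []
    for entry in root.findall(ATOM + "entry"):
        title = entry.findtext(ATOM + "title") or ""
        link_el = entry.find(ATOM + "link")
        href = link_el.get("href") if link_el is not None else ""
        items.append((title, href))
    return items
```

From there each pair could be posted to one's own site as an annotation post, keeping a self-owned copy of everything made in the Hypothesis UI.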
Another reservation is that there currently isn’t a simple or standard way of marking up highlights or marginalia, much less displaying them, specifically within the WordPress ecosystem. As I don’t believe Ian’s site is currently as fragmentions-friendly as mine, I’m using links on the date/time stamp for each highlight/annotation, which use Hypothesis’ internal functionality to open a copy of the annotated page and automatically scroll down to the fragment, as mentioned before. I could potentially see people choosing to facepile highlights and/or marginalia, wanting to display them in-line within their text, or possibly displaying them as standalone comments in their comments sections. I could also see people wanting to choose between these options based on the particular portions or potentially on the senders. Some of my own notes are really set up as replies, but the CSS I’m using physically adds the word “Annotation”–I’ll have to remedy this in a future version.
The other benefit of these date/time stamped Hypothesis links is that I can mark them up with the microformat u-syndication class for the future as well. Perhaps someone might implement backfeed of comments until and unless Hypothesis implements webmentions? For fun, some of my annotations on Hypothesis also have links back to my copy as well. In any case, there are links on both copies pointing at each other, so one can switch from one to the other.
I could imagine a world in which it would be nice if I could use a service like Hypothesis as a micropub client and compose my highlights/marginalia there and micropub it to my own site, which then in turn sends webmentions to the marked up site. This could be a potential godsend to researchers/academics using Hypothesis to aggregate their research into their own personal online (potentially open) notebooks. In addition to adding bookmark functionality, I could see some of these be killer features in the Omnibear browser extension, Quill, or similar micropub clients.
I could also see a use-case for adding highlight and annotation kinds to the Post Kinds plugin for accomplishing some of this. In particular it would be nice to have a quick and easy user interface for creating these types of content (especially via bookmarklet), though again this path also relies on doing individual posts instead of a single post or potentially a collection of posts. A side benefit would be in having individual tags for each highlight or marginal note, which is something Hypothesis provides. Of course let’s not forget the quote post kind already exists, though I’ll have to think through the implications of that versus a slightly different semantic version of the two, at least in the ways I would potentially use them. I’ll note that some blogs (Colin Walker and Eddie Hinkle come to mind) have a front page that display today’s posts (or the n-most recent); perhaps I could leverage this to create a collection post of highlights and marginalia (keyed off of the original URL) to make collection posts that fit into my various streams of content. I’m also aware of a series plugin that David Shanske is using which aggregates content like this, though I’m not quite sure this is the right solution for the problem.
Eventually, with some additional manual experimentation and thought in doing this, I’ll get around to adding some pieces and additional functionality to the site. I’m still also interested in adding some of the receipt/display functionality I’ve seen from Kartik Prabhu, which is also related to some of this discussion.
Is anyone else contemplating this sort of use case? I’m curious what your thoughts are. What other UI examples exist in the space? How would you like these kinds of reactions to look on your site?
Editor’s note: This post was originally published on BoffoSocko.com
Webmentions: Enabling Better Communication on the Internet
Over 1 million Webmentions can’t be wrong. Join the next revolution in web communication. Add the Webmentions standard to your website to solve the biggest communications problem on today’s internet and add rich context to your content.
https://alistapart.com/article/webmentions-enabling-better-communication-on-the-internet
My post on A List Apart is up!
Brief Review: The Rule of Four by Ian Caldwell and Dustin Thomason
The Rule of Four by Ian Caldwell and Dustin Thomason My rating: 4 of 5 stars A nice little thriller about an obscure text from the Renaissance (quattrocento) set in modern times. This falls into the genre of historical fiction that’s similar to Dan Brown‘s Robert Langdon series or films like the Nicolas Cage National Treasure series, though not quite as “rompish.” I have to imagine that those who…
Reply to John Scalzi on “How Blogs Work Today”
Does blogging need to be different than it was?
I agree with John that blogs seemingly occupy a different space in online life today than they did a decade ago, but I won’t concede that, for me at least, most of it has moved to the social media silos.
I think the role of the blog is different than it was even just a couple of years ago. It’s not the sole outpost of an online life, although it can be an anchor, holding it in place. — John Scalzi
Why? About two years ago I began delving into the evolving movement known as IndieWeb, which has re-empowered me to take back my web presence and use my own blog/website as my primary online hub and identity. The tools I’ve found there allow me to not only post everything to my own site first and then syndicate it out to the social circles and sites I feel it might resonate with, but best of all, the majority of the activity (comments, likes, shares, etc.) on those sites boomerangs back to the comments on my own site! This gives me a better grasp on where others are interacting with my content, and I can interact along with them on the platforms that they choose to use.
Some of the benefit is certainly a data ownership question: who is left holding the bag if a major site like Twitter or Facebook is bought out or shut down? This has happened to me in dozens of cases over the past decade, where I’ve put lots of content and thought into a site only to see it shuttered and have all of my data and community disappear with it.
Other benefits include: cutting down on notification clutter, more enriching interactions, and less time wasted scrolling through social sites.
Reply from my own site
Now I’m able to use my own site to write a comment on John’s post (where the comments are currently technically closed), and keep it for myself, even if his blog should go down one day. I can alternately ping his presence on other social media (say, by means of Twitter) so he’ll be aware of the continued conversational ripples he’s caused.
Social media has become ubiquitous in large part because those corporate sites are dead simple for Harry and Mary Beercan to use. Even my own mother’s primary online presence begins with http://facebook.com/. But not so for me. I’ve taken the reins of my online life back.
My Own Hub
My blog remains my primary online hub, and some very simple IndieWeb tools enable it by bringing all the conversation back to me. I joined Facebook over a decade ago, and you’ll notice by the date on the photo that it didn’t take me long to complain about the growing and overwhelming social media problem I had.
I’m glad I can finally be at the center of my own social graph, and it was everything I thought it could be.
Editor’s note: This post was originally published on BoffoSocko.com
Image courtesy of Industrial Artifacts, who are selling this cabinet for $3,250.00.