Web 3.0, Scraping, and IFrames
Being an online media professional is very much like being a sociologist or a psychiatrist. None of us really have any clue how anything in our field works or how it ought to work, so we spend much of our time making shit up and hoping that it sounds awesome. This is what we call "theory." And for every Lacanian psychoanalyst or critical theorist, there is some digital swami blathering about "increased layers of meaning" or "intertwingled longtails" or some such ginned-up piffle.
The paradigm-smashing theoretical framework of the moment is "Web 3.0." Theorists of Web 3.0 manage to use the language and tone of Viktor Frankl while describing what is, so far as I can tell, a plan to steal shit from other websites while keeping your ass covered legally.
My question: instead of "scraping" from other websites—"scraping" being trade talk for taking their stuff while ensuring they get nothing out of it—why can't we just revert to the old method of "transcluding" their content? Transcluding means that everyone on Jewcy gets to read their stuff, but they still get their pageviews and advertising revenue.
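In practice, transclusion on today's web mostly means an iframe: the embedded page is served straight from its owner's servers, so their pageviews and ad impressions still count. A minimal sketch (the URL here is a placeholder, not a real article):

```html
<!-- Transcluding someone else's article. The browser fetches the page
     directly from its owner, so their traffic and ads still register.
     "example.com/some-article" is a stand-in, not an actual URL. -->
<iframe src="http://example.com/some-article"
        width="100%" height="600" frameborder="0">
  Your browser doesn't support iframes; here's a
  <a href="http://example.com/some-article">link to the article</a>.
</iframe>
```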
Transcluding seems to have gone out of fashion sometime in internet pre-history (the 90s? Is that possible?), but it seems like a more effective, less labor-intensive, and vastly fairer way to poach proprietary content.
You can't transclude a New York Times page, because they have some sort of fancy technical barrier set up. So in the spirit of ethnic fraternity I'll just sample the content of someone closer to home.
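For the curious: that "fancy technical barrier" is presumably a frame-busting script, though I'm guessing at the Times' particulars. The standard trick is a few lines of JavaScript that check whether the page is sitting inside someone else's frame, and if so, break out of it:

```html
<!-- A guess at the kind of barrier involved: a classic frame-buster.
     If this page is loaded inside another site's frame, it replaces
     the entire browser window with its own URL, killing the frame. -->
<script type="text/javascript">
  if (window.top !== window.self) {
    window.top.location = window.self.location;
  }
</script>
```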