‘Facebook on Decline?’

The Facebook business model – how sustainable is it really?

Think about the amount of time you spent on the site when you first joined.  It was quite a bit, right?  There were so many people to ‘friend’: lost acquaintances from college, high school and grade school.  But then you connected with all of your friends, both past and present.  If your experience was anything like mine, your high school class formed a group that you and everyone else joined, and you quickly added everyone in it.

Done.  Now what?

The article ‘The Beginning of the End for Facebook?’ by Ben Bajarin essentially argues this point with some telling examples.

“[Ben] recently polled almost 500 high school students in San Jose, and shockingly, not all of them were on Facebook. But perhaps not surprisingly, nearly all who were said they were basically bored with the site and had been using it significantly less.”

If you’ve already searched out all of those long-lost friends, you may have entered the ‘content management phase’ of Facebook, where you are essentially maintaining and occasionally updating existing profile information.  Not exactly exciting, and the polling Ben completed would seem to agree.

With Facebook looking to complete an IPO next year, the longevity of its business model needs evaluation, or at least an analysis of the usage life-cycle of the typical Facebook customer.  I’m sure this evaluation is not far away, as plenty of investment analysts will want a chance to value the social media behemoth.

And I don’t know about you, but I will be very interested to see the results.

Is anyone else seeing similar reports?

Source:

http://techland.time.com/2011/12/05/the-beginning-of-the-end-for-facebook/#disqus_thread


Playing Devil’s Advocate for Online Plagiarism Checkers

Let me begin by stating upfront that I do not support plagiarism.  This blog posting plays devil’s advocate against the current systems, which compare a document to online sources as well as to sources within internal databases.

Here’s an example: four graduate students at a major mid-western university work for hours writing a final position paper for their capstone finance class.  The position paper discusses a recent HBS case the team has analyzed thoroughly.  Locked away in a conference room in the business school, they create the document collaboratively, choosing each word carefully and discussing the meaning of each sentence because of the limited word count and the importance of the final paper.  Their professor demands that each paper be uploaded to a website for submission.  The website automatically checks for plagiarism before the document is submitted, and it gives the students the option (for a small fee!) to see the results of the plagiarism check before the document is delivered to the professor.

One of the students decides to pay the fee to check the final paper.  The professor has demanded a plagiarism score of less than 5% (not unreasonable).  The other teammates scoff: the team has been working together the entire time, carefully crafting each sentence to maximize meaning and space, so there is no way the score could be over 1%…

Result: 14% plagiarized

Had the team submitted the paper without paying the fee, they would have violated university policy and might not have received their diplomas.

So what is the issue?  How did an original work result in a score of 14%?

The online system in question, Turnitin, scans the internet for matching sentence order and word frequency within a given sentence.  Additionally, the system stores all previous submissions, building a database of original works to compare each new document against.  The idea is that a student at University A might copy another student’s work from University B; the database is built to combat exactly that problem.
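Turnitin’s actual matching algorithm is proprietary, but the general idea described above can be sketched with a simple word n-gram overlap check (the function and sample sentences below are illustrative assumptions, not Turnitin’s real method): a submission looks “suspicious” when many of its short word sequences also appear in a source document, even if the wording differs slightly elsewhere.

```python
def ngrams(text, n=4):
    """Return the set of word n-grams in a text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=4):
    """Fraction of the submission's n-grams that also appear in the source."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    return len(sub & ngrams(source, n)) / len(sub)

# Two independently written sentences about the same case can still overlap
# heavily, because there are only so many ways to state the same point.
a = "the firm should hedge its currency exposure using forward contracts"
b = "we believe the firm should hedge its currency exposure using options"
print(f"{overlap_score(a, b):.0%}")
```

Note how two sentences that no one would call copied still score a large overlap; this is the mechanism by which carefully crafted original work on a well-worn case can trip the detector.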

However, this system creates a new problem: original work is becoming harder to produce because of the immense volume of information already on the internet and the growing volume within internal databases.

My own theory is that because so many schools use the same HBS, Darden and Ivey (to name a few) business cases, students keep arriving at similar effective solutions, and frankly there are only so many ways to state the same information.  How diluted does the meaning of a sentence become when its structure is modified just to make it original?  Furthermore, with each minor modification the system gains a new document to compare against, making things increasingly difficult for future students.

I don’t have a solution to this problem.  Plagiarism in academia is pervasive, and reducing the number of comparison documents doesn’t seem viable, as the University A and B example above shows.  But as the volume of information on the internet continues to grow, it already appears that creating original work (especially within a limited case scope) is becoming increasingly difficult.

Does anyone agree / disagree?

Source:

http://cyberdash.com/plagiarism-detection-software-issues-gvsu

http://www.plagiarism.org/plag_article_did_you_know.html

http://www.plagiarismtoday.com/2008/12/16/review-the-plagiarism-checker/

http://www.internetworldstats.com/emarketing.htm