19 November 2011

Maintaining your OLI - the problem

Earlier, I argued that online learner identities (OLIs) are pivotal in a learning ecology that is personal in that it takes the learner as its starting point, and social in that it puts this starting point at the centre of an online network of people with kindred interests (see more on this in Berlanga and Sloep, 2011, Towards a Digital Learner Identity). Such an ecology thrives on the services with which it is populated. These services come in different kinds, but the main categories are:

• social services - services that intelligently match a learner with others in his or her social network; these others come in a variety of roles, such as fellow learner, team buddy, coach, mentor, tutor, supporter, supervisor, assessor, etc. - basically all the different roles teachers in ordinary formal education adopt, and a few more

• content services - services that match a learner's learning objectives or needs with content that could help fulfill those needs; such content will often take the form of (preferably openly accessible) documents (explicit knowledge), but could also be implicit knowledge, accessible only by approaching the people who bear it.
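To make the content-service idea a little more concrete, here is a minimal sketch of how such matching might work, assuming that both a learner's needs and the available documents are described by topic tags and that candidates are simply ranked by tag overlap. The document titles, tags, and scoring rule are all illustrative assumptions, not a description of any actual service.

```python
# Hypothetical sketch of a content-matching service: learner needs and
# documents are both described by topic tags; documents are ranked by how
# many tags they share with the learner's needs. Everything here is invented
# for illustration.

def match_content(learner_needs, documents):
    """Rank documents by the number of topic tags shared with the learner."""
    scored = []
    for doc in documents:
        overlap = learner_needs & doc["tags"]
        if overlap:
            scored.append((len(overlap), doc["title"]))
    # Highest overlap first; ties broken alphabetically by title.
    scored.sort(key=lambda pair: (-pair[0], pair[1]))
    return [title for _, title in scored]

docs = [
    {"title": "Intro to Networked Learning", "tags": {"networks", "learning"}},
    {"title": "Privacy on the Social Web",   "tags": {"privacy", "web"}},
    {"title": "Learning Analytics Basics",   "tags": {"learning", "data"}},
]

print(match_content({"learning", "privacy"}, docs))
```

A real content service would of course use far richer signals than flat tags (full-text search, the learner's expertise level, openness of the licence), but the basic shape - a profile of needs matched against a described collection - is the same.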

The people in your social network are good candidates to fulfill the various roles in your learning ecology. Your search behaviour speaks to the things you want to learn, as do, say, your blog posts and Wikipedia entries; they also reveal your level of expertise. Presumably, the more detailed the data about your network, your search behaviour, your posts, your tweets, etc. - that is, the richer the description of your OLI - the better the social and content services are able to facilitate your learning. So learning benefits from a rich OLI description.

Providing such a rich description, however, poses a privacy risk. The risk may be as grave as identity theft, that is, somebody intentionally posing as another person whose personal data have been stolen, with the intention of harming that individual; or it may be moderate, as when two similar but different individuals are accidentally mixed up, without any harmful intent. So the individual learner faces a dilemma. She should reveal everything about herself, as this improves the learning experience, but she should reveal nothing at all, to lower the risks of privacy loss. How can this dilemma be tackled? The answer is that a learner should be able to grant differential access rights to her OLI data: different groups of people get different rights. Thus, people whom one has grown to trust are granted more rights than complete strangers. Perhaps people affiliated with a well-known educational institution are also endowed with more rights, and so on. In this conception, controlling one's privacy is equivalent to controlling the access rights to one's data. In the next installment I will explain a schema for how this could in principle be achieved technically. However, and this is the topic of the present post, implementing any such solution, one which puts a user in control of her OLI data, is hard if not impossible to achieve on the current social web.
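As a rough illustration of what differential access rights could look like, here is a minimal sketch, assuming a simple model in which each OLI field carries a minimum trust level and each group of people is granted a level. The groups, field names, and levels are invented for illustration only; this is not the schema itself.

```python
# Hypothetical sketch of differential access rights to OLI data: each data
# field is stored with the minimum trust level required to see it, and each
# group of people is granted a level. Groups, fields, and values below are
# illustrative assumptions.

TRUST_LEVELS = {"stranger": 0, "affiliated": 1, "trusted": 2}

class OLIProfile:
    def __init__(self):
        # Maps field name -> (value, minimum trust level needed to view it).
        self.fields = {}

    def set_field(self, name, value, min_level):
        self.fields[name] = (value, min_level)

    def view_as(self, group):
        """Return only the fields visible to the given group."""
        level = TRUST_LEVELS[group]
        return {name: value
                for name, (value, min_level) in self.fields.items()
                if level >= min_level}

profile = OLIProfile()
profile.set_field("display_name", "A. Learner", min_level=0)
profile.set_field("institution", "Example University", min_level=1)
profile.set_field("search_history", ["networked learning"], min_level=2)

print(profile.view_as("stranger"))  # only the public field
print(profile.view_as("trusted"))   # every field
```

The point of the sketch is only that "controlling one's privacy" then reduces to managing the levels attached to fields and groups - which is exactly what the current social web makes so difficult.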

First, on the current social web, data are provided freely. Web users provide them in exchange for the services that social web sites offer. Google, for instance, allows people to carry out searches in return for the searcher's consent to let Google collect and compile a user profile, which furthers its commercial interests. Something similar goes for Facebook, Twitter, etc. Although in principle you may decide not to agree to such schemes, in practice this is no more an option than disconnecting yourself from the electricity grid. If you want to search, you use Google; if you want to make online friends, you use Facebook; if you want to microblog, you use Twitter; etc. Second, the data are fragmented, scattered over various sites. This fragmentation makes controlling them harder, as you need to visit multiple sites. Moreover, sites such as Google and Facebook are walled gardens: they do not let your data escape, again because those data are the very foundation upon which their business rests. So your data are not just fragmented, they are also deliberately kept out of your control. Clearly, in the face of this, no individual person stands much of a chance of controlling his or her personal data, that is, ultimately also his or her privacy (see my earlier post on this issue).

Interestingly, even scarily if you think about it, the issue of privacy does not seem to bother the majority of Internet users. The discussion on privacy occasionally flares up, for instance when privacy settings turn out to reveal more than before as a consequence of a licence update (Facebook), or when location data on private wifi networks turn out to have been collected and stored (Google). But the big picture, that massive amounts of data have already been collected and stored and are used on a regular basis, fails to upset people. Wrongly so, as I have argued.

[adapted and updated December 29, 2011]