The second thing I wanted to blog about is what I’m doing this term. I’m really excited about most of it; only one module is going to be a real struggle – I’m already prepared for that and am doing lots of extra reading.
The first one is Further Web Science, which is basically exploring lots of web-related issues with a view to being able to write a report on a specific topic as if we were academics reporting to, say, a government select committee or similar. So far, the topics covered have included advice for students on current issues such as access to medicines online, so-called ‘revenge porn’ and other aspects of cybercrime. This was followed by data privacy versus the need to analyse ‘big data’ in real time. Next it’ll be hacktivism and more cybercrime. We’ve already had a lecture from an academic who regularly writes reports at the request of the government. This is most excellent!
Then comes Computational Thinking. This will involve some programming using Python on an actual Raspberry Pi that we get to keep afterwards. This is also excellent.
The Science of Social Networks is equally awesome. The aims are:
- To understand how to measure the performance and behaviours of social networks
- To understand the impact of social networks on different domains
- To understand the challenges and affordances of social networking for society
…and finishes up with presenting a portfolio of work (carried out in groups) supporting an idea for a new social network app, followed by a Dragons’ Den-style presentation. This will be really important for me, as the analysis of social networks is something I’ll be doing for my project to complete my MSc, and for my PhD.
Finally, there’s Semantic Web Technologies. This will be the hardest one for me, as it involves the technical architecture of the web as it develops towards a future where databases are linked together and searchable in ways that go far beyond how Google and other search engines currently work. The language read by Firefox (or whatever browser you’re using) has to be extended to make it possible for machines (computers) to read data, not just display it. You might have a fabulous database (probably a spreadsheet) of all your music CDs, but what if you could link it to someone else’s database of, say, album covers? And someone else’s about bands and artists? And someone else’s about graphic designers and/or photographers? Link these together and you have something really rich in information for the user, with no need to bookmark ten different sites, because your browser, together with a search engine, has delivered it all to you.
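To get a feel for the linking idea, here’s a toy sketch in plain Python (all the identifiers and data are made up; the real Semantic Web uses RDF and proper vocabularies, but the underlying ‘join on a shared identifier’ idea is the same):

```python
# A toy sketch, with made-up data: three separate "databases" that all
# refer to the same album via a shared identifier (a URI), so a program
# can join them without any one database knowing about the others.

my_cds = {
    "http://example.org/album/first-album": {"title": "First Album", "owned": True},
}
album_covers = {
    "http://example.org/album/first-album": {"cover_designer": "A. N. Artist"},
}
bands = {
    "http://example.org/album/first-album": {"artist": "The Examples"},
}

def link(*databases):
    """Merge records that share the same identifier across databases."""
    merged = {}
    for db in databases:
        for uri, facts in db.items():
            merged.setdefault(uri, {}).update(facts)
    return merged

combined = link(my_cds, album_covers, bands)
# The joined record now carries facts from all three sources:
print(combined["http://example.org/album/first-album"])
```

Because every source uses the same identifier for the album, the merged record ends up with the title, the cover designer and the artist all in one place, which is exactly the ‘one rich answer instead of ten bookmarks’ effect described above.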
A whole new language has had to be developed to facilitate this, and it has been standardised so that everyone does the same thing. Then there’s the problem of how different databases categorise different types of data – this must be made explicit to the bit of software reading the information, and there are specific ways of doing that as well. If you’re lost by now, I know how you feel, but I may blog about it again on here, if only to try and get things clear in my own head.
Oh, and the marks for last term are coming in, and I’m clearly going to need to do better this time around so that I have a decent buffer zone.