A Gentle Introduction to Algorithm Complexity Analysis


Theoretical computer science has practical applications. In this article, aimed at programmers who know their craft but have no theoretical computer science background, I will present one of the most pragmatic tools the field offers: Big O notation and algorithm complexity analysis. Having worked both in academic computer science and on production software in industry, this is one of the few tools I have found genuinely useful in practice, so I hope that after reading this article you can apply it to your own code to make it better. Afterwards, you should be able to understand the common terms computer scientists use, such as "big O", "asymptotic behavior", and "worst-case analysis".
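The article's central move (count how an algorithm's work grows with the input size n, then keep only the dominant term) can be previewed with a toy comparison. This sketch is mine, not the article's, and the function names are made up; it counts the basic operations of a linear scan against an all-pairs comparison:

```python
def count_linear(items):
    """Linear scan: the loop body runs once per element,
    so the operation count grows proportionally to n -- O(n)."""
    ops = 0
    for _ in items:
        ops += 1
    return ops

def count_quadratic(items):
    """All-pairs comparison: the inner loop runs n times for each
    of the n outer iterations -- n * n operations, O(n^2)."""
    ops = 0
    for _ in items:
        for _ in items:
            ops += 1
    return ops

for n in (10, 100, 1000):
    data = list(range(n))
    print(n, count_linear(data), count_quadratic(data))
# At n = 1000: 1,000 operations vs 1,000,000 -- as inputs grow,
# the asymptotic behavior dominates, not the constant factors.
```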

Stanford Online – CS224N – Natural Language Processing


Students develop an in-depth understanding of both the algorithms available for the processing of linguistic information and the underlying computational properties of natural languages. The focus is on modern quantitative techniques in NLP: using large corpora, statistical models for acquisition, disambiguation, and parsing. Word-level, syntactic, and semantic processing is considered from both a linguistic and an algorithmic perspective…

Dictionary + algorithm + PoD t-shirt printer + lucrative meme = rape t-shirts on Amazon

Pete Ashton explains how programmers can accidentally generate pro-rape t-shirts to sell on Amazon (by swapping a list of all English verbs into the “Keep Calm and ____ On” meme), but then draws a conclusion about digital literacy that I think is exactly backwards.

This is a great example of what I think Digital Literacy should mean. The world around us is increasingly governed by these algorithms, some annoyingly dumb and some freakishly intelligent. Because these algorithms generally mimic decisions that used to be made directly by people, we have a tendency to humanise the results and can easily be horrified by what we see. But some basic understanding of how these systems work can go a long way to alleviating this dissonance. You don’t need to be able to write the programmes, just understand their basic rules and how they can scale.

Is he suggesting that the problem here is that non-programmers don’t understand enough about algorithms? I think the problem is that the algorithm’s creators didn’t think enough about the context of their program, a.k.a. the real world.
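To make that concrete, here is a minimal sketch of what such a generator presumably looked like. The names, word list, and blocklist are all hypothetical, not anyone's actual code; the point is that the template substitution itself is trivial, and the missing piece is any model of context:

```python
# A guess at the generator's shape -- the names and word list are
# hypothetical. The bug is contextual, not syntactic: nothing
# constrains which verbs are acceptable in the template.
VERBS = ["carry", "dance", "hit", "rape", "read"]  # "all English verbs"

def naive_slogans(verbs):
    # What the generator apparently did: blind template substitution.
    return [f"KEEP CALM AND {verb.upper()} ON" for verb in verbs]

# The missing step: a human-curated filter reflecting the real world.
BLOCKLIST = {"hit", "rape"}

def filtered_slogans(verbs):
    return [f"KEEP CALM AND {verb.upper()} ON"
            for verb in verbs if verb not in BLOCKLIST]

print(naive_slogans(VERBS))     # includes the offensive output
print(filtered_slogans(VERBS))  # the context-aware version
```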

This reminds me of Falsehoods Programmers Believe About Names and Time. Apparently we need a conversation on Falsehoods Programmers Believe About Words, too.

Latanya Sweeney’s name produces a different view than yours.

Another example of how a “passive” or “neutral” algorithm will reflect racism and other biases from its context. Nice to see a response to this situation that asks how algorithms can reflect better politics, rather than asking how people can be more forgiving of the programmers who didn’t think about this.

Professor Latanya Sweeney found that searching “Black-identifying” names like hers resulted in Google.com and Reuters.com generating ads “suggestive of an arrest in 81 to 86 per cent of name searches on one website and 92 to 95 per cent on the other.” This means that when Professor Latanya Sweeney (who has no criminal record) googles herself, or when anyone googles her, one of the top results is “Latanya Sweeney: Arrested?” According to the study, when we google Black-identifying names, we’re very likely to see the words “criminal record” or “arrest.” That view sucks! And it only serves to reinforce negative stereotypes, which potentially limit people with “Black” names from accessing equal means of sustenance and amenities. Meanwhile, googling a white-identifying name produces “neutral” content. (The ads that come up when I google my own name offer viewers private information for a fee.)

And it is how this digital view is shaped that is most disturbing: Google assures us that there is no racial bias in the algorithms it uses to position ads. Rather, the algorithms “learn over time” which ads are selected most frequently and then display those. The algorithms are simply reflecting the dominant values of our time, but demonstrating them to each of us differently, depending on our own particularities and on what is known from our individual and collective clicks: these algorithms cannot result in a more panoramic view. So, thank you to Latanya Sweeney for rubbing the fog off of my view, for now at least. Otherwise, because of my race, and my name, I may not have seen the racist outcomes these algorithms are producing.
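The mechanism described here (show ads in proportion to past clicks, then learn from the new clicks) is easy to simulate. Below is a toy model of my own devising, not Google's actual system: two hypothetical ad templates for the same name start with equal weight, and a small skew in which one people click compounds into a large skew in which one gets shown:

```python
import random

random.seed(1)

# Two hypothetical ad templates; the starting weights are equal --
# the "algorithm" itself encodes no racial bias.
ads = {"Name: Arrested?": 1, "Contact Name": 1}

# Assumption: searchers click the arrest-suggestive ad slightly more
# often (55% vs 45%) -- a small bias in the surrounding culture,
# not in the code.
CLICK_RATE = {"Name: Arrested?": 0.55, "Contact Name": 0.45}

for _ in range(10_000):
    # Show an ad in proportion to its accumulated clicks...
    shown = random.choices(list(ads), weights=ads.values())[0]
    # ...and feed every new click back into the weights.
    if random.random() < CLICK_RATE[shown]:
        ads[shown] += 1

print(ads)  # the arrest-suggestive ad typically ends up with most of
            # the weight: the feedback loop amplifies the initial skew.
```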