
Developing digital skills with UpScale

This blog was written by Frederic Kalinke, an ex-Googler who is now Managing Director of agile marketing technology company Amigo.

I am a big fan of the UpScale programme at Birkbeck, which inspires students to work in the wonderful world of digital technology. Several big brands like LinkedIn, ASOS, JustGiving and MediaMath are partners, offering dedicated seminars to aspiring students. I have delivered a number of workshops focused on the power of Google and online marketing. In this article, I want to share why I believe UpScale is so important, as well as some tips on how to learn digital skills effectively.

I started my career at Google. Besides overdosing on sushi and chocolate, I learnt everything there is to know about Google’s marketing tools, which help businesses acquire customers online. I was also lucky to discover a passion so early. The thing that got me out of bed in the morning was developing novel and effective ways to teach companies how Google products work. Before I dive into those teaching methods, it’s worth spending some time exploring why technology is a fantastic place to work.

Never get bored

The UpScale programme focuses exclusively on the digital technology sector. Why? The UpScale website talks about employer demand. As the world gets increasingly digital, companies will continue to require and reward people who have technical skills and interests. This is undeniably true. You only have to look at the market salaries for software developers, data scientists and digital marketers to understand that demand for digital talent outstrips supply.

I would argue, however, that there is an intrinsic reason why technology is a fantastic career choice: it never gets boring! By nature it constantly evolves and never stands still. Here’s a clear example. Before the internet, the hotel, taxi, retail and entertainment industries remained largely unchanged. Hoteliers and taxi companies enjoyed oligopolistic privileges, so they could charge customers whatever they wanted; high street shops enjoyed healthy margins because customers had no choice but to buy their goods and services from them; and content producers, movie distributors and cinemas moved in lockstep, creating a profitable triumvirate. Then the internet arrived. And so did AirBnB, Uber, Amazon and Netflix, which have completely transformed their respective industries. It’s mind-boggling to think that two of these companies did not even exist 9 years ago. And none of them existed 23 years ago.

I was advised to work in digital by the wise CEO of a large FMCG company whom I met at university. He told me to forget the FMCG (Fast Moving Consumer Goods) sector because, despite its name, it was the “commercial snail”. It turns out that washing powder and toothpaste don’t really change that much.

So if you want excitement and constant innovation, digital technology will not disappoint and UpScale will equip you with the skills and networks to help get you there.

How to learn digital skills effectively

Having established the significance and thrill of working in technology, I’d now like to outline three ways to learn digital skills effectively. These insights are based on my experience of running several UpScale workshops.

  1. Interactive learning: From the very start of my workshop, I involve everybody in warm-up exercises and thought experiments to get people thinking. I am a big believer in the saying that if you “tell somebody to do something they will forget, if you show somebody they will remember, but if you involve somebody they will understand”. Because digital technology touches every part of our life, I advise students to get together in small groups to debate digital and challenge each other with questions like: why is Amazon so successful? Why is Twitter’s stock price so low? If you had £100k, what business would you set up and why? Why is using data important in decision-making? Which industry will be disrupted by technology next?
  2. Metaphors: I use a lot of metaphors to teach digital marketing concepts. For example, when we look at keyword planning, the bedrock of Search Engine Marketing, I use fishing and football; when we discuss Website Optimisation, I use the metaphor of a great restaurant. Metaphors make new things memorable and familiar. I always advise students to devise their own metaphors for newly learnt subjects and try them out on friends. As the Feynman Technique tells us, explaining something to a newbie is the best way to master any topic.
  3. Get practical: The last part of my workshop is about applying theory to practical exercises. Participants create their own Google AdWords campaign for an industry of their choosing. In whatever technical subject you are learning, there is always a practical application. Whether you’re learning a programming language, getting to grips with data science or building a Microsoft Excel dashboard, get stuck in by building something. You will be amazed at how much this aids the learning process.

Google’s new NMT speaks its own language

This post was contributed by Alan Mosca, a PhD student in Birkbeck’s Department of Computer Science and Information Systems. Alan tweets at @nitbix

A Google research group has announced a breakthrough that could have a deep impact on the field of automated translation of documents and web pages.

In the recently released article “Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation”, the researchers show how their Neural Machine Translation (NMT) system is able to translate between pairs of languages for which it has never seen any examples.

In practice, this means that Google’s system is able to translate automatically between two languages without adopting the “trick” of interlingual translation. (Interlingual translation is a technique commonly adopted in machine translation: a common intermediate language bridges two languages for which no parallel corpora are available. To translate between French and German, for example, the system would go French -> English -> German, and vice versa, using English as the bridging language.) The underlying model is a common deep learning method called Long Short-Term Memory (LSTM), through which a machine can learn to translate between, say, English and French, and English and German, by processing examples of translations.
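To see concretely what the new system removes, here is a toy Python sketch of the pivot approach. The PAIR_MODELS table and translate function are invented stand-ins for real pairwise translation models:

```python
# Toy pairwise "translators": each table stands in for a model trained on
# one specific language pair (the word mappings are invented for illustration).
PAIR_MODELS = {
    ("fr", "en"): {"bonjour": "hello"},
    ("en", "de"): {"hello": "hallo"},
}

def translate(text: str, source: str, target: str) -> str:
    """Word-by-word translation using a single pairwise model (toy stand-in)."""
    table = PAIR_MODELS[(source, target)]
    return " ".join(table.get(word, word) for word in text.split())

def pivot_translate(text: str, source: str, target: str, bridge: str = "en") -> str:
    """Bridge two languages with no shared corpus via an intermediate language."""
    return translate(translate(text, source, bridge), bridge, target)

print(pivot_translate("bonjour", "fr", "de"))  # bonjour -> hello -> hallo
```

Zero-shot NMT removes this two-step detour: a single model maps French to German directly, even though it never saw a French-German pair during training.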

The exciting development is that all of this is achieved in a single model, which is able to operate on multiple language pairs. The model even appears to develop its own “internal representation” of concepts, completely independent of the specific languages it learns to translate. The examples in the paper are not limited to European languages, either – the system is able to translate between Japanese and Korean without seeing a single example that joins the two languages. An example of how this works is shown in Fig. 1.

Fig. 1: Example zero-shot translation after training on an intermediate language

All of this, of course, is done inside a deep learning model: an LSTM. Multilingual translation is achieved in a single model by adding a token for the destination language to the input. For example, if one wanted to translate “Hello, my name is Bob” to Spanish, the input would be “<2es> Hello, my name is Bob”.
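As a sketch, the input-side mechanics amount to a single preprocessing step. The function below is hypothetical and heavily simplified (production systems use subword tokenisation rather than whitespace splitting), but it shows the idea:

```python
# Minimal sketch of the destination-language token trick (the function name
# and whitespace tokenisation are simplifications for illustration).
def add_target_token(sentence: str, target_lang: str) -> list[str]:
    """Prefix the source tokens with an artificial destination-language marker."""
    return [f"<2{target_lang}>"] + sentence.split()

print(add_target_token("Hello, my name is Bob", "es"))
# ['<2es>', 'Hello,', 'my', 'name', 'is', 'Bob']
```

Nothing else about the input changes: the model itself learns to read the marker and steer the output language accordingly.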

A further exciting observation made by researchers from Google Brain is that the system does not need to be told what language the input is in, disambiguating the difficult cases on its own. Take the word “burro” for instance: it means “butter” in Italian but “donkey” in Spanish. Even for words that have the same spelling but different meanings in different languages, the system is usually able to discriminate based on context.

The model learns an “encoder” LSTM and a “decoder” LSTM; it has a similar appearance to multi-layer auto-encoders. The centre contains an attention model, and the layer just before the attention is the one that outputs the “common encoding”: a semantic representation of the input that is language-independent.
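The Google model itself stacks many LSTM layers and is far too large to reproduce here, but the overall shape of the architecture can be sketched in a few lines of PyTorch. Everything below (class name, dimensions, single-layer depth, and the use of a generic attention module) is invented for illustration, not taken from the paper:

```python
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """Sketch of an encoder-attention-decoder translation model."""

    def __init__(self, vocab_size: int = 1000, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.attention = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
        # The encoder output plays the role of the "common encoding":
        # a representation the decoder reads regardless of source language.
        memory, _ = self.encoder(self.embed(src))
        dec, _ = self.decoder(self.embed(tgt))
        # The decoder attends over the encoding rather than the raw tokens.
        ctx, _ = self.attention(dec, memory, memory)
        return self.out(ctx)  # logits over the target vocabulary

model = TinySeq2Seq()
src = torch.randint(0, 1000, (2, 7))  # source token ids, "<2xx>" token included
tgt = torch.randint(0, 1000, (2, 5))  # shifted target token ids
print(model(src, tgt).shape)          # torch.Size([2, 5, 1000])
```

In the real system, it is the activations at this encoder-decoder interface that appear to form the language-independent semantic representation described above.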

Being Google, the team tested not only on the standard machine translation benchmark datasets but also on its own internal dataset, which is probably very large and certainly very private. The code is very private too, but the researchers have given us an insight into the kind of infrastructure they needed: 100 (presumably state-of-the-art) GPUs, trained for over 3 weeks. The results are impressive, beating state-of-the-art ad-hoc models in a few cases. For a single model covering multiple languages, Google’s NMT system provides a great advantage, and we should expect ever better translations from Google Translate as a consequence.

 
