While many advances have been made in recent years in computational techniques and methods for studying the massive amounts of data available online, research in the field has been struggling with the challenges posed by the visual component of these objects. Although computer vision techniques have been known for decades, their limits as automated annotation tools – or the prohibitive costs and technical complexity of more recent, adequate approaches – have hindered their application in the humanities and social sciences. Addressing this matter, the course will present and discuss a set of methods and tools for studying large visual datasets based on the recent availability of cloud-based computer vision frameworks, dealing specifically with the Google Vision API. The possibilities and limitations of these techniques will be discussed, alongside hands-on training in their implementation in research practice.
˚ ˚ Preparation ˚ ˚
Participants are encouraged to prepare for the workshop by familiarizing themselves with Gephi; DMI Tools (in particular: Google Image Scraper, Google Reverse Image Scraper, Image Scraper, and data collection tools such as Netvizz or TumblrTool); SciencesPo Médialab tools (in particular: Table2Net and Catwalk); and with very basic command-line procedures (changing directory, listing contents).
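The basic command-line procedures mentioned above can be rehearsed in any macOS or Linux terminal (or on Windows via a tool such as Git Bash), for example:

```shell
cd ~    # change directory: move to your home folder
ls      # list the contents of the current folder
ls -l   # list contents with details (size, modification date)
```

These two commands (`cd` and `ls`) are all that is assumed as prior command-line knowledge.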
It is important to note that the workshop will focus on analysis procedures and will not cover the data extraction steps that precede them. Thus, although sample datasets will be provided, participants are strongly advised to explore data extraction tools for the types of data and platforms they might be interested in, and to collect the data they would like to work with prior to the activities. Data files should be plain text files containing comma-separated or tab-separated spreadsheets (CSV or TSV files), with one column containing URLs of still-image files (jpg, png, etc.). Beware that these should be URLs to the image files themselves, not to the full posts that contain them.
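As a rough sanity check of such a dataset before the workshop, a short script along the following lines can verify that a column holds direct links to image files rather than to full posts. This is only a sketch: the column name `image_url` is a hypothetical example and should be replaced by whatever your spreadsheet actually uses.

```python
import csv

# Common still-image file extensions accepted by the workshop format
IMAGE_EXTENSIONS = (".jpg", ".jpeg", ".png", ".gif")

def check_image_urls(path, column="image_url", delimiter=","):
    """Return the row numbers whose URL column does not point to an image file.

    Use delimiter="\t" for tab-separated (TSV) files.
    """
    bad_rows = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f, delimiter=delimiter)
        for row_number, row in enumerate(reader, start=2):  # row 1 is the header
            url = (row.get(column) or "").strip().lower()
            # Drop query strings like ?width=640 before checking the extension
            url = url.split("?", 1)[0]
            if not url.endswith(IMAGE_EXTENSIONS):
                bad_rows.append(row_number)
    return bad_rows
```

An empty result means every row looks like a direct image link; any row numbers returned point to entries that likely link to a post page instead.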
Please note that the workshop will use the Google Vision API, which is a paid service not included in the workshop fee. Costs for the workshop activities tend to be very low (i.e. less than 10 USD) and may even be nil, since there is a free monthly quota and, in addition, new users of Google's Cloud services are granted an initial credit. However, participants will be required to create their own account for the service, which requires registering a credit card. At least a week prior to the workshop dates, participants will be sent instructions on creating that account, as well as on software to install and other possible requirements.
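To give a sense of what an automated annotation request looks like, the sketch below builds the JSON body for the Vision API's `images:annotate` REST endpoint, asking for label detection on a remotely hosted image. The image URL is a placeholder; actually sending the request (to `https://vision.googleapis.com/v1/images:annotate`) requires the API key or credentials from the account setup described above, so that step is omitted here.

```python
import json

def build_annotate_request(image_url, max_results=10):
    """Build the JSON body for a Vision API images:annotate call,
    requesting label detection on an image hosted at a URL."""
    return {
        "requests": [
            {
                "image": {"source": {"imageUri": image_url}},
                "features": [
                    {"type": "LABEL_DETECTION", "maxResults": max_results}
                ],
            }
        ]
    }

# Placeholder URL for illustration only
body = build_annotate_request("https://example.com/photo.jpg")
print(json.dumps(body, indent=2))
```

Other feature types (e.g. `WEB_DETECTION`, `FACE_DETECTION`) follow the same request structure, which is why a single spreadsheet of image URLs can be annotated in bulk.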
˚ ˚ Applications and Tuition Fee ˚ ˚
Please send an email to smart.inovamedialab[at]fcsh.unl.pt with your CV (with photo) and a brief statement introducing your research interests and explaining how this workshop may benefit your current work.
The deadline for applications was 8 January 2018; it has been extended to 21 January 2018.
The cost of “Image Networks” is 160 euros for the general public, with a reduced fee of 135 euros for NOVA students. There is a special combined fee for those who are also interested in the SMART Data Sprint:
| Courses | Deadline for applications | Tuition Fee |
| --- | --- | --- |
| Image Networks: Automated Analysis of Visual Content | New date: 21 January 2018 | EUR 160 (all participants); EUR 135 (NOVA students) |
| SMART Data Sprint: Interpreters of Platform Data + Image Networks: Automated Analysis of Visual Content | New date: 21 January 2018 | EUR 385 (all participants); EUR 325 (NOVA students) |
André Mintz – PhD candidate in Communication Studies, Universidade Federal de Minas Gerais (Brazil), researcher at the Intermedia Connections Research Group (NucCon) and CAPES Foundation scholarship holder. MA in Communication Studies, Universidade Federal de Minas Gerais (Brazil). MA in Media Arts Cultures, Aalborg University (Denmark), Lodz University (Poland) and Danube University Krems (Austria).