Pedro Alves Nogueira, Developer in Porto, Portugal

Pedro Alves Nogueira

Verified Expert in Engineering

Computer Vision Developer

Location
Porto, Portugal
Toptal Member Since
February 13, 2015

Pedro is a senior researcher and prototype developer with a PhD in AI, human-computer interaction, and affective computing. His background in academic and startup environments gives him an edge in implementing state-of-the-art, elegant, and efficient custom-built solutions. Additionally, his experience as director of engineering for a multi-million-dollar startup makes him an expert communicator and project manager.

Availability

Part-time

Preferred Environment

Git, Sublime Text, Eclipse, MacOS

The most amazing...

...thing I’ve created is a hybrid computer vision/machine learning system to automatically detect cellular infection ratios for Leishmania drug trials.

Work Experience

Director of Engineering

2015 - PRESENT
Toptal, LLC
  • Coordinated the launch of the Artificial Intelligence and Data Science verticals for Toptal. This involved building the vetting processes, hiring the recruiting teams, internal sales and operations training, and executing the launch, PR, and growth initiatives.
  • Gathered client requirements and expectations and, based on that, interviewed and filtered the best candidates.
  • Tracked and improved internal processes to follow company growth.
  • Helped clients improve their remote workflows, organized the communication practices needed for remote work, and executed initial scope/business analyses.
  • Improved hiring practices and fine-tuned screening processes, leading to more rigorous filtering of new talent.

Professor

2013 - PRESENT
University of Porto - Faculty of Engineering
  • Taught various courses on introduction to programming (Scheme/LISP), computing theory, and advanced programming (C/C++) for the Master's in Computer Science program. Also responsible for routinely conducting seminars and workshops on advanced AI, HCI, and digital game topics.
Technologies: User Experience (UX), Lisp, Scheme, C++, C

Researcher

2012 - PRESENT
Artificial Intelligence and Computer Science Laboratory
  • Developed a generic architecture for designing affective game engines as well as affective player modeling algorithms based on emotional reaction data for digital video games, both presented at AAAI's annual conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE).
  • Created a symbolic game simulator that enabled testing for the emotional elicitation capabilities of created models.
  • Created tools to automatically collect, pre-process, analyze, and visualize psychophysiological data from gameplay sessions as well as annotate emotional reactions using HCI best practices and protocols.
  • Performed in-depth live studies with human participants to determine effects of real-time, affective, rule-based adaptive video games on players' perceived immersion, gameplay experience, and physiological data. These tests were also followed by thorough statistical significance and data visualization analysis via custom-made visualization tools.
  • Coordinated parallel work in biofeedback video games, affective-adaptive movies, full spectrum sensorial integration prototypes, affective procedural content generation, and emotional NPC research.
  • Built a generic multi-modal framework to add customized natural interfaces to existing games. Interaction modes include Kinect gesture recognition and speech recognition.
  • Created the four-year project vision and single-handedly secured full funding for its development.
  • Established and developed an international cooperation with the well-known Gamer Lab group at the University of Ontario Institute of Technology.
Technologies: Machine Learning, Java, C#, Python, R

Computer Vision Expert

2015 - 2017
RAD FitKey
  • Designed a computer vision algorithm for extracting body size measurements from still pictures.
  • Researched and consulted with the client regarding potential alternate avenues (e.g., adding 3D scene reconstruction capabilities).
  • Supervised and provided guidance on the development efforts and technical implementation details of the algorithm.
Technologies: C++, OpenCV

Consultant - MASSIVE R&D Project

2014 - 2016
INESC-Tec
  • Performed requirements and technology analysis regarding multi-sensorial and physiological interaction techniques for a full-sensorial simulator.
  • Aided in the scenario and objective/deliverable delineation process.
  • Performed budgeting and material acquisition decisions.
  • Managed two teams responsible for two high-visibility scenarios and for disseminating results in peer-reviewed venues.
  • Showcased the project, gaining exposure for it.
Technologies: Python, MySQL, C#, Java

Recruiting Manager

2014 - 2015
CleverTech
  • Oversaw the full-cycle recruiting process, from sourcing and interviewing to offer negotiation and onboarding.
  • Implemented the technical testing processes and the scoring system. Tasked with performing hires for high-profile clients.
  • Helped maintain internal operations through team building and resource allocation policies.
  • Implemented added functionality on internal hiring tools and provided technical supervision on parallel feature development.
  • Kept in-house KPIs on the hiring process (e.g., churn rate) to help predict new hiring needs and talent acquisition estimates.
Technologies: MongoDB, Node.js, AngularJS

Professor

2012 - 2013
University of Porto - Faculty of Letters
  • Taught several courses on relational databases, introduction to programming (Python), and geographic information systems (GIS) for the Master's in Geographical Information Systems program.
Technologies: MySQL, Python

Developer

2011 - 2011
Virtual Embodiment and Robotic re-Embodiment (VERE), Telecommunications Institute (IT)
  • Developed eye-tracking methodologies for head-mounted displays and ocular movement replication in 3D avatars.
Technologies: C++, Python

Computer Vision & Machine Learning Developer/Analyst

2010 - 2011
Molecular and Cellular Biology Institute (IBMC)
  • Performed on-field requirements analysis regarding annotation protocols and image characteristics.
  • Developed a fully automated model for determining infection rates in Leishmania-infected confocal microscopy imaging.
  • Conducted statistical validation procedures on the method, showing it outperformed trained human experts and potentially sparing thousands of manual annotation hours per laboratory team per year.
  • Contributed to several publications in peer-reviewed conferences and journals, such as the European Conference on Neural Networks and the Artificial Intelligence Review journal.
Technologies: Java

User Experience and Human-Computer Interaction Researcher

2009 - 2010
Vital Responder (Carnegie Mellon University - Portugal Programme)
  • Performed on-field contextual studies and requirements analysis of firefighter activity for the development of novel wearable, intelligent biometric monitoring suits and the emergency response framework they were integrated into.
  • Employed techniques that included, but were not restricted to: in-depth interviews, focus group interviews, shadowing, visual anthropology, questionnaires, mental model formation, paper prototyping, cognitive walkthroughs, and requirements elicitation.
  • Contributed to a final report specifying the stakeholders' main requirements, flaws in current activity practices, and possible improvements, followed by use cases, scenario workflows, and an initial prototype proposal.
Technologies: Human-computer Interaction (HCI), User Experience (UX), Python, Java

Network Coding Researcher

2009 - 2009
NCrave European Project, Telecommunications Institute (IT)
  • Studied the impact of imperfect feedback on state-of-the-art network coding protocols as part of the European project NCrave.
Technologies: Python

Computer Vision Expert @ RAD Fitkey

Helped build the computer vision algorithm for an iOS app designed to automate body size measurements.
The solution uses a combination of computer vision and anthropometric science to deduce the positions and measurements of several body points from still images taken by users, and matches these to clothing vendors' item sizes for a clear-cut shopping experience.
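The actual algorithm is proprietary, but the final scaling step such a system relies on can be illustrated: once body landmarks are located in pixel space, the user's stated height serves as the reference that converts pixel distances into real-world units. A simplified sketch (all names and data are hypothetical):

```python
def scale_measurements(landmark_pairs, body_height_px, body_height_cm):
    """Convert pixel distances between detected body landmarks into
    centimetres, using the user's known height as the scale reference."""
    scale = body_height_cm / body_height_px  # cm per pixel
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return {name: dist(p, q) * scale
            for name, (p, q) in landmark_pairs.items()}
```

A production system would additionally correct for camera perspective and pose, which is where the suggested 3D scene reconstruction would come in.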

Monte-Carlo AI for The Octagon Theory

Defined and supervised the implementation of a novel MCTS-based AI for The Octagon Theory, a game whose complexity is orders of magnitude above that of chess.

The MCTS AI was later augmented with Bayesian opponent-modelling capabilities that allowed it to surpass not only all known human players but also all previously implemented AIs. The work was featured on AIGameDev.com (https://aigamedev.com/broadcasts/session-mcts-tot/).
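As an illustration of the core technique only (not the actual Octagon Theory implementation, which is far more involved), a minimal UCT-style MCTS can be sketched on a toy single-pile Nim game; all names here are hypothetical:

```python
import math
import random

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.visits, self.wins = [], 0, 0.0

def legal_moves(pile):
    # Toy game: single-pile Nim; take 1 or 2 stones, taking the last wins.
    return [m for m in (1, 2) if m <= pile]

def rollout(state, rng):
    # Random playout; returns 1 if the player to move from `state` wins.
    turn = 0
    while state > 0:
        state -= rng.choice(legal_moves(state))
        turn ^= 1
    return 1 if turn == 1 else 0

def uct_search(pile, iterations=2000, c=1.4, seed=1):
    rng = random.Random(seed)
    root = Node(pile)
    for _ in range(iterations):
        node = root
        # Selection: descend fully expanded nodes by the UCB1 score.
        while node.children and len(node.children) == len(legal_moves(node.state)):
            node = max(node.children, key=lambda ch: ch.wins / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # Expansion: try one untried move, if any remain.
        untried = [m for m in legal_moves(node.state)
                   if m not in {ch.move for ch in node.children}]
        if untried:
            move = rng.choice(untried)
            child = Node(node.state - move, parent=node, move=move)
            node.children.append(child)
            node = child
        # Simulation + backpropagation (perspective flips at each level).
        result = rollout(node.state, rng)
        while node is not None:
            node.visits += 1
            node.wins += 1 - result  # win for the player who moved into `node`
            result = 1 - result
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move
```

From a pile of 4, the winning move is to take 1 stone (leaving the opponent a multiple of 3), which the search converges on after a few hundred iterations.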

Automatic Analysis of Microscopy Leishmania-Infected Cellular Images

In this project, I single-handedly developed a fully automated pipeline for processing confocal microscopy imaging using computer vision and machine learning techniques. The process was adopted into existing software to save thousands of manual annotation hours on drug research trials aimed at identifying a cure for Leishmania. The pipeline is completely parameterisable to account for different magnification levels, signal-to-noise ratios, and background assumptions, thanks in part to a modular architecture. The project was implemented fully in Java, with all core computer vision algorithms manually implemented, optimized, and tuned; models were trained in Weka and later imported into the Java code.
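The production system was written in Java with hand-rolled vision code; purely as an illustration of the core idea, the infection-ratio computation (binarize each fluorescence channel, label cell blobs via connected components, and count the cells that overlap parasite signal) could be sketched in Python as follows, with toy image data and thresholds:

```python
def threshold(img, t):
    # Binarize a grayscale channel (nested lists) at intensity t.
    return [[1 if px > t else 0 for px in row] for row in img]

def label_components(mask):
    # 4-connected component labeling via iterative flood fill.
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                current += 1
                labels[y][x] = current
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            stack.append((ny, nx))
    return current, labels

def infection_ratio(cell_channel, parasite_channel, t_cell=0, t_parasite=0):
    # A cell counts as infected if any parasite pixel falls inside its blob.
    n_cells, labels = label_components(threshold(cell_channel, t_cell))
    parasites = threshold(parasite_channel, t_parasite)
    infected = {labels[y][x]
                for y in range(len(labels)) for x in range(len(labels[0]))
                if parasites[y][x] and labels[y][x]}
    return len(infected) / n_cells if n_cells else 0.0
```

The real pipeline adds denoising, illumination correction, and learned classifiers on top of this skeleton, which is exactly where the parameterisation mentioned above comes in.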

Hiring System

http://hire.clevertech.biz/
Implemented new features and fixed bugs on CleverTech’s internal hiring system, and defined and provided technical supervision for parallel and subsequent development. The system was responsible for processing all new applications to the company’s job openings, as well as for applicant tracking, rating, and interview-process reporting. Employed technologies include Angular.js, Node.js, MongoDB, and Twitter Bootstrap.

Psychophysiological Inductive Emotional Reaction

Developed a suite of parallel, multilevel models able to transform an individual’s physiological data (skin conductivity, heart rate, and facial muscle activation) into emotional states (arousal and valence). Employed methods included, but were not limited to: linear/polynomial regression, decision trees, artificial neural networks, support vector machines, fuzzy rules, random forests, and (weighted) ensemble models. Development was done in R and in Python (the latter for the fuzzy rules and data pre-processing).
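By way of illustration, the simplest member of such a model suite — a univariate linear regression per physiological channel, combined by a weighted ensemble — might look like this (toy data; the real models were trained on multichannel recordings):

```python
def fit_linear(xs, ys):
    # Ordinary least squares for y = a*x + b on one physiological channel.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def ensemble_predict(models, weights, x_per_channel):
    # Weighted average of each channel model's arousal prediction.
    total = sum(weights)
    return sum(w * (a * x + b)
               for (a, b), w, x in zip(models, weights, x_per_channel)) / total
```

Replacing `fit_linear` with any of the other regressors named above, while keeping the weighted-average combiner, yields the ensemble variants.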

Emotional Event Triangulation Tool

A video/physiological data synchronization and processing tool for aiding direct-observation UX studies. The tool integrated a previously developed, configurable emotion detection system to process the physiological data into emotional states for visualization. Users could annotate relevant events from a (user-defined) list of existing events and add optional detailed feedback to each event. Once annotated, the session could be saved in a specific format (CSV or EET) and later loaded for further inspection/validation by other team members. The tool's main selling point was its ability to employ a peak-detection algorithm to automatically determine emotional reactions to the annotated events from the physiological/emotional data. This reduced intra- and inter-personal bias while also saving man-hours. The project was implemented in C#, using the ZedGraph library.
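The peak-detection idea can be sketched as a z-score test of the post-event signal against the pre-event baseline — a simplification of the actual algorithm, with illustrative window sizes and thresholds:

```python
import statistics

def detect_reactions(signal, event_indices, window=5, z=2.0):
    """Flag an emotional reaction for each annotated event if the signal
    shortly after the event rises z standard deviations above the
    pre-event baseline."""
    reactions = {}
    for e in event_indices:
        baseline = signal[max(0, e - window):e] or [signal[e]]
        mu = statistics.mean(baseline)
        sd = statistics.pstdev(baseline) or 1e-9  # guard against a flat baseline
        reactions[e] = max(signal[e:e + window]) > mu + z * sd
    return reactions
```

Tying detections to annotated event timestamps, rather than scanning the whole session, is what lets the tool attribute each physiological peak to a concrete gameplay event.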

Affective Player Modelling

This work involved developing models of players' affective reactions to specific game events, based on their observed past reactions. Related activities included analyzing the collected data, testing several feature extraction and selection algorithms, and training machine learning classifiers. We finally settled on a more structured, grounded approach that took the data’s nature into account, using a matrix of regression models for each event/emotional dimension. Having dealt with idiosyncratic reactions, scaling issues, and data sparsity through this approach, we then used the created models to compute distance metrics between player pairs, enabling a hierarchical clustering approach followed by a fuzzy cluster approximation formula. Development was done in R, but the models were later fully mapped so they could be loaded into memory for direct access (see the GOAD project). For details, see: http://www.aaai.org/ocs/index.php/AIIDE/AIIDE14/paper/viewPDFInterstitial/8947/8939.
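The distance-then-cluster step can be sketched as follows — a toy single-linkage agglomeration over per-player regression matrices; the published work builds a fuzzy cluster approximation on top of this, and all names here are illustrative:

```python
def player_distance(model_a, model_b):
    # Models map (game_event, emotion_dimension) -> regression slope.
    keys = set(model_a) | set(model_b)
    return sum((model_a.get(k, 0.0) - model_b.get(k, 0.0)) ** 2
               for k in keys) ** 0.5

def cluster_players(models, k):
    # Single-linkage agglomerative clustering down to k clusters.
    clusters = [[name] for name in models]
    def link(c1, c2):
        return min(player_distance(models[a], models[b])
                   for a in c1 for b in c2)
    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: link(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return [sorted(c) for c in clusters]
```

Comparing players through their fitted model coefficients, rather than raw reaction traces, is what sidesteps the scaling and sparsity issues mentioned above.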

Gameplay Optimization & ADjustment (GOAD) Simulation System

Developed a simple, symbolic simulator to automatically play out all possible variants of a digital game within a specific range of configurations. This simulator was used to automatically assess the effectiveness of gameplay adaptations dictated by affective player models: given a model of how a specific player might react to game events, it identifies the set of gameplay parameters best able to drive the player towards a specified emotional state or pattern. The project was fully implemented in Java and ran in headless mode (configuration files were generated by a separate GUI). This allowed us to speed up the system, which generated over 36 GB of pure text log data in a one-hour simulation run spanning several hundred million simulation steps.
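Conceptually, the simulator sweeps the game's configuration space exhaustively and scores each configuration against the player model's predicted emotional outcome, which a toy sketch can capture (the parameter names and the scalar-valued model are illustrative stand-ins):

```python
import itertools

def best_configuration(param_grid, player_model, target_emotion):
    """Exhaustively evaluate every gameplay configuration and return the
    one whose predicted emotional outcome is closest to the target."""
    best, best_error = None, float("inf")
    for combo in itertools.product(*param_grid.values()):
        config = dict(zip(param_grid, combo))
        error = abs(player_model(config) - target_emotion)
        if error < best_error:
            best, best_error = config, error
    return best
```

The real system plays out full symbolic game runs per configuration rather than evaluating a scalar model, which is why headless batch execution mattered.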

A Capella Mixer

This is a web application that enables users to generate an "a capella" version of any given music track. The user need only choose an audio or video file (the audio track is extracted automatically) and load the lyrics they want to embed in the music track. The application then synthesises the given lyrics, applies a parameterisable pitch-deformation algorithm (i.e., auto-tunes the lyrics), and generates a new audio/video file that combines the music track, lyrics, and video, if available.
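For intuition only, the crudest possible pitch deformation is plain resampling — which also changes duration, so a real auto-tune pipeline would instead use something like PSOLA or a phase vocoder to preserve timing. A deliberately naive sketch:

```python
def pitch_shift_naive(samples, semitones):
    """Shift pitch by resampling: a ratio of 2**(n/12) raises the pitch
    by n semitones (and, naively, shortens the clip by the same factor)."""
    ratio = 2 ** (semitones / 12)
    length = int(len(samples) / ratio)
    return [samples[min(len(samples) - 1, int(i * ratio))]
            for i in range(length)]
```

Shifting up one octave (12 semitones) doubles the playback rate, so the naive version keeps every second sample and halves the clip's length.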

Gemini

In recent years, video game companies have recognized player engagement as a major factor in user experience and enjoyment. This encouraged greater investment in new types of game controllers, such as the WiiMote, Rock Band instruments, and the Kinect. However, the native software of these controllers was not originally designed to be used with other game applications. This work addresses the issue by building a middleware framework that maps body poses or voice commands to actions in any game. This not only affords a more natural and customised user experience but also defines an interoperable virtual controller. In this version of the framework, body poses and voice commands are recognized through the Kinect's built-in cameras and microphones, respectively. The acquired data is then translated into the native interaction scheme in real time using a lightweight method based on spatial restrictions. The system is also prepared to use Nintendo's WiiMote as an auxiliary, unobtrusive gamepad for physically or verbally impractical commands. System validation was performed by analyzing the performance of certain tasks and examining user reports; both confirmed this approach as a practical and alluring alternative to a game's native interaction scheme. In sum, this framework provides a game-controlling tool that is fully customizable and very flexible, thus expanding the market of game consumers.
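The "spatial restrictions" mapping can be illustrated: each virtual-controller command is bound to a conjunction of relative joint-position constraints, checked against the tracked skeleton every frame. A minimal sketch (the joint names, binding format, and y-grows-upward convention are assumptions for illustration):

```python
AXES = {"x": 0, "y": 1}

def pose_to_command(joints, bindings):
    """joints: joint name -> (x, y) position.
    bindings: command -> list of (joint_a, axis, relation, joint_b)
    restrictions; the first command whose restrictions all hold fires."""
    for command, restrictions in bindings.items():
        satisfied = True
        for joint_a, axis, relation, joint_b in restrictions:
            a = joints[joint_a][AXES[axis]]
            b = joints[joint_b][AXES[axis]]
            if relation == "above" and not a > b:
                satisfied = False
            elif relation == "below" and not a < b:
                satisfied = False
        if satisfied:
            return command
    return None
```

Because the bindings are plain data rather than code, remapping a game to new gestures only requires editing the binding table — the interoperable-virtual-controller property described above.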

Featured on Dedicated Broadcast on AIGameDev

Invited to participate in a live, premium broadcast on AIGameDev pertaining to our MCTS AI algorithms for The Octagon Theory.

AAAI Member

A member of the AAAI society for two consecutive years, due to continued publications at its conferences.

Scientific Advisor

Successfully supervised over 10 M.Sc. theses in the digital games and game theory domains, with an average grade of A.

Representative sample of topics include:
- Direct biofeedback shooter games
- Affective movies
- Biofeedback horror games
- Emotional NPCs and human player mimicking
- Affective PCG
- MCTS for complex board games (orders of magnitude above chess)
- DBF framework
- MCTS for stealth games
- PCG for platform games

Languages

R, Python, ECMAScript (ES6), Java, JavaScript, C, C++, Scheme, C#, PHP, Lisp, Prolog

Libraries/APIs

Scikit-learn, OpenCV, Node.js, React, jQuery

Tools

Weka, Adobe Photoshop, Microsoft Visual Studio, Robo 3T (Robomongo), OmniGraffle, Sublime Text, Shell, Mongoose, Git, Eclipse IDE

Paradigms

Qualitative Research, Data Science, Imperative Programming, Object-oriented Programming (OOP), Agile Software Development, Human-computer Interaction (HCI), Functional Programming

Industry Expertise

Project Management

Other

Signal Processing, Technical Project Management, University Teaching, Neural Networks, Consulting, Research, Research Reports, Data Research, Quantitative User Research, Support Vector Machines (SVM), Clustering Algorithms, Hierarchical Clustering, Data Mining, Computer Vision, Machine Learning, Data Visualization, Data Analysis, Data Engineering, Data Modeling, Natural Language Processing (NLP), Chatbots, Monte Carlo, Audio Processing, Convolutional Neural Networks (CNN), Random Forests, Ajax, Dia, Generative Pre-trained Transformers (GPT), User Experience (UX), Recurrent Neural Networks (RNNs), Markov Chain Monte Carlo (MCMC) Algorithms, Process Simulation

Frameworks

Bootstrap, Express.js, AngularJS

Platforms

Meteor, iOS, MacOS, Windows, Linux, Eclipse, RapidMiner

Storage

MongoDB, JSON, MySQL

2011 - 2016

Ph.D. (Summa Cum Laude) in Artificial Intelligence/Human-Computer Interaction

University of Porto - Faculty of Engineering/University of Ontario - Institute of Technology - Porto, Portugal

2009 - 2011

Master's Degree (Summa Cum Laude) in Computer Science (Cryptography & Artificial Intelligence)

University of Porto - Faculty of Sciences - Porto, Portugal

2006 - 2009

Bachelor's Degree in Computer Science

University of Porto - Faculty of Sciences - Porto, Portugal
