Daniel Burfoot

Berkeley, CA, United States
Member since June 8, 2017
Daniel is an experienced software engineer, data scientist, and NLP researcher with expertise in Java programming and a Ph.D. in machine learning. He has worked with Hadoop, the AWS cloud, SQL databases (MySQL/PostgreSQL), front-end web development in HTML/JavaScript, machine learning algorithms, Python, and more.
  • Java, 12 years
  • Linux, 8 years
  • SQL, 6 years
  • JavaScript, 6 years
  • Machine Learning (ML), 6 years
  • PostgreSQL, 4 years
  • Natural Language Processing (NLP), 4 years
  • Hadoop, 3 years
Preferred Environment
Java, PostgreSQL, Resin, REST, React, Linux, AWS
The most amazing...
...project I've built is a combined sentence parser and text compressor; the former finds the parse tree that produces the shortest code length for the latter.
  • Founder
    Ozora Research
    2014 - PRESENT
    • Developed machine learning algorithms for sentence parsing and modeling.
    • Designed, developed, and performance-tuned back-end SQL databases.
    • Worked on the user interface and visualization for the system’s admin console (JavaScript and HTML5).
    • Worked on DevOps to enable the code to run on Linux instances on the AWS cloud (S3, EC2, RDS, and Spot Market).
    • Designed the software architecture in Java to ensure that all the pieces interacted smoothly.
    Technologies: Java, PostgreSQL, NLP, ML, Amazon (EC2, RDS, Spot Market)
  • Lead Scientist
    2011 - 2014
    • Worked as the primary developer of a big data audience analysis system.
    • Programmed Hadoop, using native Java SDK, to process big data from real-time ad exchanges.
    • Developed a system to connect the Hadoop output to a machine learning algorithm.
    • Built a visualization/analysis back-end in MySQL to enable clients to understand the audience profile and characteristics.
    • Integrated the audience analysis system with other components of the company's stack (the bidder system and the operations console).
    • Wrote additional significant ETL code in Java for the company's reporting system.
    Technologies: Java, Hadoop, MySQL, Amazon (EC2, EMR, S3), Machine Learning
  • Software Developer
    Rodale Press (Contract)
    2009 - 2010
• Developed SmartCoach and SmartCoachPlus, an automated training program generator for runners.
    • Programmed the initial version in JavaScript and the second version primarily in Java/JSP.
    • Developed a MySQL back-end for the second version.
    • Implemented complex training program generation rules.
    Technologies: Java, JavaScript
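The actual SmartCoach rules are proprietary, but the flavor of rule-driven plan generation can be sketched with an invented example: build weekly mileage by a fixed percentage, then taper in the final weeks before race day. All numbers and rule choices here are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only; SmartCoach's real rules are proprietary.
// One hypothetical rule: increase weekly volume by 10%, but cut back
// to 70% per week during the final two-week taper.
public class TrainingPlanSketch {
    static List<Double> weeklyMileage(double startMiles, int weeks) {
        List<Double> plan = new ArrayList<>();
        double miles = startMiles;
        for (int week = 1; week <= weeks; week++) {
            plan.add(Math.round(miles * 10) / 10.0); // record, rounded to 0.1 mi
            if (week >= weeks - 2) {
                miles *= 0.7;  // taper rule
            } else {
                miles *= 1.10; // build rule
            }
        }
        return plan;
    }

    public static void main(String[] args) {
        // An 8-week plan starting from a 20-mile week.
        System.out.println(weeklyMileage(20.0, 8));
    }
}
```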
  • Flow Diagram (Development)

    To use this tool, a developer first writes an algorithm or software process using a special set of conventions, then automatically extracts a visual diagram describing the algorithm.

    The diagram is very useful for documentation purposes; other developers (or the original developer, at a later point in time) can easily understand the way the code works just by looking at the diagram, without needing to dive into the specific details.
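One way such a convention-based extractor can work is sketched below; the @Step annotation and emitDot helper are hypothetical stand-ins, not the actual tool. Steps of an algorithm are marked with an annotation, and a Graphviz DOT diagram is extracted from the class by reflection.

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Comparator;

// Hypothetical sketch of convention-based diagram extraction:
// annotated methods define the steps, and a DOT graph is generated.
public class FlowDiagramSketch {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Step { int order(); }

    // Example process written with the annotation convention.
    static class LoadAndClean {
        @Step(order = 1) void readInput() {}
        @Step(order = 2) void normalize() {}
        @Step(order = 3) void writeOutput() {}
    }

    // Emit a linear DOT graph connecting the annotated steps in order.
    static String emitDot(Class<?> cls) {
        Method[] steps = Arrays.stream(cls.getDeclaredMethods())
                .filter(m -> m.isAnnotationPresent(Step.class))
                .sorted(Comparator.comparingInt(
                        (Method m) -> m.getAnnotation(Step.class).order()))
                .toArray(Method[]::new);
        StringBuilder dot = new StringBuilder("digraph flow {\n");
        for (int i = 0; i + 1 < steps.length; i++) {
            dot.append("  ").append(steps[i].getName())
               .append(" -> ").append(steps[i + 1].getName()).append(";\n");
        }
        return dot.append("}\n").toString();
    }

    public static void main(String[] args) {
        System.out.println(emitDot(LoadAndClean.class));
    }
}
```

The resulting DOT text can be rendered by Graphviz to produce the visual documentation described above.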

  • Ozora Research Sentence Parser (Development)

    At Ozora Research, I built a broad grammar sentence parser without using labeled training data (almost all other work in the area of parsing depends on labeled "treebank" data).

    The parser is built in combination with a specialized text compressor, which compresses text by using a parse tree. The parser selects the tree that yields the smallest code length for the given sentence. You can demo the parser at the link provided.
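The core selection step can be sketched as follows; this is a toy model, not the actual Ozora parser, and the rule-cost table is invented for illustration. Each candidate parse is scored by the code length of its derivation, and the cheapest one wins.

```java
import java.util.*;

// Toy sketch of MDL-style parse selection (not the Ozora parser):
// pick the candidate derivation with the shortest total code length.
public class MdlParseSelector {
    // Hypothetical cost table: bits needed to encode each grammar rule.
    static final Map<String, Double> RULE_BITS = Map.of(
        "S->NP VP",  1.0,
        "NP->Det N", 1.5,
        "VP->V NP",  1.5,
        "VP->V",     2.5
    );

    // Code length of a derivation = sum of the bits for its rules.
    static double codeLength(List<String> derivation) {
        double bits = 0.0;
        for (String rule : derivation) {
            bits += RULE_BITS.getOrDefault(rule, 10.0); // unknown rules cost a lot
        }
        return bits;
    }

    // Choose the candidate with minimum code length.
    static List<String> bestParse(List<List<String>> candidates) {
        return Collections.min(candidates,
                Comparator.comparingDouble(MdlParseSelector::codeLength));
    }

    public static void main(String[] args) {
        List<List<String>> candidates = List.of(
            List.of("S->NP VP", "NP->Det N", "VP->V NP", "NP->Det N"),
            List.of("S->NP VP", "NP->Det N", "VP->V")
        );
        List<String> best = bestParse(candidates);
        System.out.println(codeLength(best) + " bits: " + best);
    }
}
```

In the real system the costs come from a learned statistical model rather than a fixed table, but the selection principle is the same.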

  • Notes on a New Philosophy of Empirical Science (Other amazing things)

    This is a book that I wrote about a new approach to empirical science based on lossless data compression. In this philosophy, a researcher proposes a theory, builds the theory into a data compressor, and measures the quality of the theory by invoking the compressor on a large shared data set. If the theory achieves a lower net code length (including the size of the compressor itself) than previous theories, it is confirmed as the new "champion" theory.

    This philosophy guided my work at Ozora Research. In this case, the relevant data set was English newspaper text. To compress this data, I developed theories of grammar and syntax, and built those theories into a data compressor.
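The evaluation loop can be illustrated with off-the-shelf pieces; this is a sketch, not the actual benchmark. A "theory" is a compressor, and the better theory achieves the smaller net code length. Here java.util.zip's DEFLATE stands in for a real theory, and MODEL_SIZE is a hypothetical constant approximating the size of the compressor's own description.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;

// Sketch of compression-based theory evaluation: compare the net code
// length of a trivial theory (store the data as-is) against a
// DEFLATE-based theory (compressed size plus model size).
public class CompressionBenchmark {
    // Trivial theory: no model, no savings.
    static int trivialCodeLength(byte[] data) {
        return data.length;
    }

    // Hypothetical stand-in for the size of the compressor's description.
    static final int MODEL_SIZE = 64;

    static int deflateCodeLength(byte[] data) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(data);
        deflater.finish();
        byte[] buf = new byte[data.length * 2 + 64]; // ample output buffer
        int compressed = deflater.deflate(buf);
        deflater.end();
        return compressed + MODEL_SIZE;
    }

    public static void main(String[] args) {
        byte[] corpus = "the cat sat on the mat. ".repeat(200)
                .getBytes(StandardCharsets.UTF_8);
        // On this repetitive corpus, DEFLATE wins and would be
        // confirmed as the new "champion" theory.
        System.out.println("trivial=" + trivialCodeLength(corpus)
                + " deflate=" + deflateCodeLength(corpus));
    }
}
```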

  • Statistical Modeling as a Search for Randomness Deficiencies | Ph.D. Thesis (Other amazing things)

    My Ph.D. thesis developed an approach to statistical modeling based on the search for randomness deficiencies in an encoded form of the data.

    According to algorithmic information theory, if a given model is a perfect fit for a data set, then when you encode the data using the model, the resulting encoded data (typically a bit string) is completely random. This implies that if you encode the data using a model and find a randomness deficiency in the encoded result, then there is a flaw in the model. Furthermore, analyzing the randomness deficiency suggests a way to improve the model.

    The thesis developed a suite of machine learning algorithms that work by using this idea.
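A minimal illustration of the idea (not one of the thesis algorithms): if the model fits, its encoded bit string should look random, so even a simple bias test on the bit frequencies can expose a model flaw.

```java
// Toy randomness-deficiency detector: for a good model, the encoded
// bits should be near 50% ones; a large bias signals a model flaw.
public class DeficiencyCheck {
    // Fraction of ones in the encoded bit string.
    static double onesFraction(boolean[] bits) {
        int ones = 0;
        for (boolean b : bits) if (b) ones++;
        return (double) ones / bits.length;
    }

    // Flag a deficiency when the bias exceeds four standard deviations
    // (sigma = 0.5 / sqrt(n) for fair coin flips).
    static boolean hasDeficiency(boolean[] bits) {
        double sigma = 0.5 / Math.sqrt(bits.length);
        return Math.abs(onesFraction(bits) - 0.5) > 4 * sigma;
    }

    public static void main(String[] args) {
        int n = 10_000;
        boolean[] encoded = new boolean[n];
        java.util.Random rng = new java.util.Random(42);
        // A flawed model leaves structure behind: 60% ones instead of 50%.
        for (int i = 0; i < n; i++) encoded[i] = rng.nextDouble() < 0.6;
        System.out.println("deficiency detected: " + hasDeficiency(encoded));
    }
}
```

The thesis algorithms search for far subtler deficiencies than a frequency bias, but the detect-and-repair loop follows this pattern.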

  • Languages
    Java, SQL, JavaScript, Python, XML
  • Other
    Natural Language Processing (NLP), Machine Learning (ML)
  • Paradigms
    Object-oriented Design (OOD)
  • Platforms
    Linux, AWS EC2, JEE
  • Storage
    AWS S3, PostgreSQL, MySQL, JSON, AWS RDS
  • Libraries/APIs
    React.js, TensorFlow
  • Ph.D. in Machine Learning
    University of Tokyo - Tokyo, Japan
    2006 - 2010
  • Master of Science in Artificial Intelligence
    McGill University - Montreal, Canada
    2004 - 2006
  • Master of Science in Physics
    University of Connecticut - Storrs, CT, USA
    2002 - 2004
  • Bachelor of Arts in Applied Math and Computer Science
    Harvard University - Cambridge, MA, USA
    1995 - 1999