
About Ben Dickson

Ben is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

Ben Dickson Posts (4)

  • Why machine learning struggles with causality - 08 Apr 2021
    If there's one thing people know how to do, it's guess what caused something else to happen. These guesses are usually good, especially when they come from direct visual observation of the physical world. AI still struggles with this kind of causal inference, and fundamental challenges must be overcome before we can have "intuitive" machine learning.
  • Silver Blog: Deep learning doesn’t need to be a black box - 05 Feb 2021
    The cultural perception of AI is often one of suspicion because it is so hard to know why a deep neural network makes the predictions it does. Researchers therefore try to crack open this "black box" after a network is trained, correlating its outputs with its inputs. But what if explainability could be designed into the network's architecture -- before the model is trained and without reducing its predictive power? Maybe the box could stay open from the beginning.
  • Machine learning adversarial attacks are a ticking time bomb - 29 Jan 2021
    Software developers and cybersecurity experts have long fought the good fight against vulnerabilities in code to defend against hackers. A new, subtle way of maliciously targeting machine learning models has become a hot research topic, but the statistical nature of these so-called adversarial attacks makes them difficult to find and patch. As the adoption of machine learning spreads, such real-world threats are becoming imminent, and a systematic defense must be implemented.
  • Doing the impossible? Machine learning with less than one example - 09 Nov 2020
    Machine learning algorithms are notorious for needing data, a lot of data -- the more data the better. But much research has gone into developing methods that need fewer examples to train a model, such as "few-shot" or "one-shot" learning, which require only a handful of examples, or even just a single one, for effective learning. Now, this lower bound on training examples is being pushed to the next extreme.
