Machine learning and data analytics continue to drive the fourth industrial revolution and affect many aspects of our lives. The first part of the talk explores hardware accelerator architectures for deep learning applications. I will present our recent work on PermDNN, a deep convolutional neural network architecture based on permuted-diagonal interconnections, and show how its structured sparsity reduces the energy consumed by memory accesses in these systems. I will then describe how gradient interleaving on systolic arrays reduces latency and memory access in accelerator architectures for training. In the second part of the talk, I will turn to machine learning in data-driven neuroscience applications and their low-energy implementations. I will discuss the use of machine learning to identify biomarkers for epilepsy from electroencephalogram (EEG) recordings, and describe approaches to energy-efficient implementation, including the roles of feature ranking and incremental-precision computation in reducing energy consumption.