About

I am a doctoral student in the Language Technologies Institute at Carnegie Mellon University, where I am advised by Yonatan Bisk and Emma Strubell.

My research interests are in characterizing and evaluating the impact of machine learning workloads, with the goal of designing computationally and energy-efficient systems for artificial intelligence. My work is generously supported by the NSF Graduate Research Fellowship.

Previously, I received my B.S. in Computer Science and B.S. in Electrical Engineering at Northwestern University, where I worked with Doug Downey and Thrasos Pappas. I've also spent time at Google, Meta FAIR, and the Allen Institute for Artificial Intelligence.

Email: jaredfern [at] cmu.edu

Out-of-Date News:
[April 2025] Presented joint work on low-power distributed training at SIGBOVIK!
[Nov. 2024] Presented our paper on continual pretraining for LLMs at EMNLP!
[June 2024] Interning at Meta FAIR studying the efficiency of distributed training!
[Dec. 2023] Presented work on deep learning framework efficiency at EMNLP!

Selected Publications

Energy Considerations of Large Language Model Inference and Efficiency Optimizations

Under Review, 2025.

Hardware Scaling Trends and Diminishing Returns in Large-Scale Distributed Training

Under Review, 2025.

Holistically Evaluating the Environmental Impact of Creating Language Models

International Conference on Learning Representations (ICLR), 2025.

Gradient Localization Improves Lifelong Pretraining of Language Models

Findings of EMNLP, 2024.

The Framework Tax: Disparities Between Inference Efficiency in Research and Deployment

Empirical Methods in Natural Language Processing (EMNLP), 2023.