Student Projects

About Us

Our team creates powerful supercomputers for modeling and analyzing the most complex problems and the largest data sets to enable revolutionary discoveries and capabilities.  Many of these capabilities have been developed and published in partnership with amazing students (see our Google Scholar page).


Listed below is a wide range of potential projects in AI, Mathematics, Green Computing, Supercomputing Systems, Online Learning, and Connection Science.  If you are interested in any of these projects, please send us an email at supercloud@mit.edu (please avoid using ChatGPT or another LLM to write your email).

  • Predicting future training needs
    To provide the documentation and training our researchers need, we must understand the suite of applications, workflows, and software tools currently in use, and develop insight into future trends in application, workflow, and software tool selection. To do this, we need to collect and analyze data from jobs run on the LLSC-SuperCloud systems, researcher help requests, the educational platform, and the user database.
    Current projects in this area include determining the data required to provide insight into our training and research support needs, and developing a corresponding data set. For example, in order to identify education and training gaps and design a prioritized suite of new examples, we need a clear picture of who our users are, what their applications require, how often they use the system, how much of the system they use, and what their usage patterns look like over time.
  • Evaluating Training Effectiveness
    The ETO team is interested in using data-driven processes to evaluate the effectiveness of our education and training modules. This research effort focuses on evaluating the impact of informal training on a researcher's HPC understanding and growth. Using data from our courses and from researchers' use of the supercomputing system, we ask: do researchers use the system effectively? Have they aligned their workflow with one of the canonical HPC workflows? Are they requesting the proper system resources, and are they using all that they have requested? For more information see:
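As a rough illustration of the kind of analysis these projects involve, the sketch below compares requested versus actually used resources per user from job-log records. All field names, numbers, and thresholds here are invented for illustration; they are not the actual LLSC SuperCloud job-accounting schema.

```python
# Hypothetical sketch: flag users whose jobs use far fewer resources
# than they request, as candidates for targeted training on resource
# requests. Field names and values are made up for illustration.

jobs = [
    {"user": "alice", "cpus_requested": 16, "cpus_used": 15.2, "hours": 2.0},
    {"user": "alice", "cpus_requested": 32, "cpus_used": 4.1,  "hours": 6.0},
    {"user": "bob",   "cpus_requested": 8,  "cpus_used": 7.8,  "hours": 1.5},
]

def utilization_by_user(jobs):
    """Average CPU utilization (used / requested), weighted by job hours."""
    totals = {}
    for job in jobs:
        used, req = totals.get(job["user"], (0.0, 0.0))
        totals[job["user"]] = (used + job["cpus_used"] * job["hours"],
                               req + job["cpus_requested"] * job["hours"])
    return {user: used / req for user, (used, req) in totals.items()}

def flag_underutilizers(jobs, threshold=0.5):
    """Users whose weighted utilization falls below the threshold."""
    return sorted(user for user, util in utilization_by_user(jobs).items()
                  if util < threshold)

if __name__ == "__main__":
    print(utilization_by_user(jobs))
    print(flag_underutilizers(jobs))
```

A real analysis would pull these fields from the scheduler's accounting logs and would likely track the trend per user over time, rather than a single weighted average.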