Generative Security Applications

Instructor: Wu-chang Feng
Class: EB 325, T 4:40pm-8:30pm
Office hours: time
Contact and discussion:
Resources
Course Description
Generative AI and Large Language Models (LLMs) are upending the practice of cybersecurity and have the potential to automate away many of the manual, time-consuming tasks in the field. This course surveys the Generative AI systems currently available and examines their utility in common cybersecurity tasks such as vulnerability discovery, reverse engineering, threat intelligence analysis, code generation, command generation, configuration generation, phishing, and social engineering. Each week, students will use a variety of LLMs and LLM agents to automatically address common problems in cybersecurity. Note that the class consists of a substantial number of in-class exercises and presentations each week; attendance and class presentations are mandatory.

Schedule

Week | Topic | Assignments
1 | Course overview, accounts setup, using LLMs, programming with LLMs | Labs
2 | LangChain tour (basics, chains, RAG) | Labs
3 | LangChain tour (agents) | Labs
4 | Securing generative applications | Labs
5 | Code summarization and reverse engineering | Labs
6 | Vulnerabilities and exploitation | Labs
7 | Command and configuration generation | Labs
8 | Code generation | Labs
9 | Threat intelligence | Labs
10 | Social engineering |
Finals week | Final project | Due Fri. @ 11:59pm

Assignments

Weekly presentations
Each week, students will explore the use of Generative AI to solve pre-defined problems in a particular category of cybersecurity, comparing the results from different models and services. They will then attempt to extend the approach to additional problems of their choice in that category. Throughout the course, students will summarize and present their results in class.
Final project
Building on the exercises examined during class, students will take a deeper dive into applying LLMs and generative AI to solve cybersecurity problems. Details will be provided via the lab site.

Course objectives

  • Examine the range of generative artificial intelligence tools and large language models currently available.
  • Use generative AI models and services to solve cybersecurity problems such as vulnerability discovery, reverse-engineering, threat intelligence analysis, code generation, command generation, configuration generation, phishing, and social engineering.
  • Demonstrate and compare the capabilities and limitations of specific generative AI models in addressing each class of cybersecurity problems.
  • Investigate potential security issues within applications utilizing large language models.

Policies

Grading
Group presentations of exercises: 30%
Individual homework programs: 40%
Final project: 30%
Attendance
The vast majority of work will be done in class. Students unable to attend a particular class will be responsible for recording a screencast walkthrough of all exercises for partial credit.
Academic misconduct
  • Includes allowing another student to copy your work unless specifically allowed by the instructor.
  • Includes copying blocks of code from external sources without proper attribution.
  • Results in a grade of 0 for the assignment or exam.
  • Results in the initiation of disciplinary action at the university level.