Generative Security Applications

Instructor: Wu-chang Feng
Class: EB 325, T 4:40pm-8:30pm
Office hours: time
Contact and discussion:
Resources
Course Description
Generative AI and Large Language Models (LLMs) are upending the practice of cybersecurity and have the potential to automate away many of the manual, time-consuming tasks in the field. This course explores the range of Generative AI systems that are available and examines their utility in common cybersecurity tasks such as vulnerability discovery, reverse-engineering, threat intelligence analysis, code generation, command generation, configuration generation, phishing, and social engineering. Each week, students will utilize a variety of LLMs and LLM agents to automatically address common problems in cybersecurity. Note that the class includes a large number of in-class exercises and presentations each week. Attendance and class presentations are mandatory.

Schedule

Week Topic Assignments
1 Course overview, Accounts Setup, Using LLMs, Programming with LLMs Labs
2 LangChain Tour (Basics, Chains, RAG) Labs
3 LangChain Tour (Agents) Labs
4 MCP Labs
5 Securing generative applications; Code summarization and reverse engineering Labs
6 Vulnerabilities and exploitation Labs
7 Command and configuration generation Labs
8 Code generation Labs
9 Threat intelligence Labs
10 Social engineering
Finals week Final project Due Fri. @ 11:59pm

Assignments

Labs and notebook
Lab assignments will be given each class covering the course material. Complete each one while maintaining a lab notebook in a Google Doc that documents your progress via screenshots with your OdinID in them. The notebook should also include answers to any questions in the labs. Notebooks should be exported as a PDF file and include a table of contents generated by Google Docs. Submit by adding, committing, and pushing the file to your private git repository, using the following naming convention:
  • notebooks/Labs<lab_number>.pdf e.g. notebooks/Labs1.pdf
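The submission steps above can be sketched as a short shell session. The repository directory, PDF location, and branch name below are placeholders, not course-specified values; substitute your own.

```shell
# Hypothetical walkthrough -- adjust paths for your own setup.
cd ~/course-repo                      # your cloned private git repository
mkdir -p notebooks                    # directory per the naming convention
cp ~/Downloads/Labs1.pdf notebooks/Labs1.pdf
git add notebooks/Labs1.pdf           # stage the exported notebook PDF
git commit -m "Submit Labs1 notebook"
git push origin main                  # push so the graders can see it
```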
The notebook will be graded based upon the following rubric:
  • Neatness and organization
  • Completeness
  • Inclusion of OdinID or project identifier in screenshots
Homework screencasts
Each week, students will explore the use of Generative AI to solve pre-defined problems in a particular category of cybersecurity, comparing the results from different models and services. They will then attempt to extend the approach to additional problems of their choice in that category. Throughout the course, students will summarize and present their results in class.
Final project
Based on the exercises examined during class, students will perform a deeper dive into applying LLMs and generative AI to solve cybersecurity problems. Details will be provided via the lab site.

Course objectives

  • Test modern generative AI models across tasks.
  • Program applications that utilize generative AI models.
  • Exploit generative AI applications and secure them from attack.
  • Apply generative AI towards solving problems in cybersecurity such as vulnerability discovery, reverse-engineering, threat intelligence analysis, code generation, command generation, configuration generation, phishing, and social engineering.
  • Utilize emerging research in AI to solve security problems.

Policies

Grading
  • Attendance and in-class exercises 15%
  • Lab notebooks 30%
  • Homework screencasts 40%
  • Final project 25%

Attendance and in-class exercises
Attendance is required and will be taken each class. There will also be in-class exercises during class. One absence is allowed with no deduction, regardless of the reason; you do not need to notify the instructor. Participation in the Slack channel is encouraged. You are expected to follow this code of conduct when communicating.

Academic misconduct
  • Includes allowing another student to copy your work unless specifically allowed by the instructor.
  • Includes copying blocks of code from external sources without proper attribution.
  • Results in a grade of 0 for the assignment or exam.
  • Results in the initiation of disciplinary action at the university level.