StMU Research Scholars

Featuring Scholarly Research, Writing, and Media at St. Mary's University

To create this AI-assisted newscast set in the aftermath of the Battle of Gettysburg, I followed a structured approach that leveraged multiple AI tools to produce an authentic and journalistic presentation.

  1. Script Development: I began by using ChatGPT to draft the script, instructing it to include a clear news broadcast format. This included an introduction, an outro, and interviews with both a Union and a Confederate soldier. I requested a tone that would suit a news segment, avoiding a documentary or summary style.
  2. Historical Accuracy and Tone Refinement: Once the initial script was complete, I input it into Claude AI to review for any historical inaccuracies. I also asked Claude to enhance the journalistic tone, ensuring that the language remained objective and informative.
  3. Image Generation: With the script finalized, I identified the key scenes that would require visual support. Using DALL-E through ChatGPT as the primary tool, I generated the initial set of images, then filled out the remaining visuals with Stable Diffusion. This approach allowed me to create diverse, scene-appropriate images.
  4. Voice Generation: For the characters’ voices, I used Play.HT and ElevenLabs, assigning each character a distinct voice to heighten the realism of the newscast.
  5. Video Creation: To animate the characters, I employed Luma AI to sync the generated voices with subtle mouth movements, enhancing the overall immersion.
  6. Intro Music: For a professional touch, I used Suno AI to create an introductory music track reminiscent of a traditional news channel theme.
  7. Editing and Assembly: Finally, I brought all the elements together in the video editing software DaVinci Resolve. I arranged the audio files on the timeline first to set accurate timing for each visual element. For dialogue scenes, I inserted the AI-generated videos of the speaking characters. For the battle scenes, where AI video generation posed challenges, I used Stable Diffusion images with smooth transitions and keyframed zoom effects to create dynamic, engaging visuals.
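The keyframed zoom in step 7 (sometimes called a "Ken Burns" effect) boils down to interpolating a zoom value between two keyframes as the playhead advances. The sketch below is a hypothetical illustration of that math only, not the author's actual DaVinci Resolve project; the function names and frame values are invented for the example.

```python
def lerp(start: float, end: float, t: float) -> float:
    """Linearly interpolate between start and end for t in [0, 1]."""
    return start + (end - start) * t

def zoom_at_frame(frame: int, key_start: int, key_end: int,
                  zoom_start: float, zoom_end: float) -> float:
    """Return the zoom factor at a given frame between two keyframes.

    Frames before the first keyframe hold the starting zoom;
    frames after the last keyframe hold the ending zoom.
    """
    if key_end <= key_start:
        return zoom_end
    # Normalize the frame position to [0, 1] and clamp it.
    t = (frame - key_start) / (key_end - key_start)
    t = min(max(t, 0.0), 1.0)
    return lerp(zoom_start, zoom_end, t)

# Example: a still image zooming from 100% to 120% over frames 0-120
# (about 5 seconds at 24 fps).
for f in (0, 60, 120):
    print(f, round(zoom_at_frame(f, 0, 120, 1.0, 1.2), 3))
```

Editors like Resolve use eased (non-linear) interpolation by default, which replaces the straight `lerp` with a smoothing curve, but the keyframe principle is the same.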

This multi-step process allowed for a seamless blending of AI-generated scripts, visuals, and audio, resulting in the cohesive and compelling historical newscast you see here.

Robert Reynolds

