Generative AI Application Retrofit – Using an LLM to Generate Correspondence (S1:E5)

[Video: S1:E5 demo on LinkedIn]

In today's update, I'm pleased to share our latest progress in streamlining correspondence within our application. We're focusing on two key areas: writing queries to publishers and effectively managing their responses.

Writing Queries: This task involves more than just drafting a letter. It's about succinctly summarizing essential information, adhering to specific formats, and understanding the diverse requirements of different publishers.

Managing Responses: The challenge here is to address various outcomes – whether a submission is pending, accepted, or rejected. My approach is to be direct yet courteous, maintaining a positive relationship with authors, even in rejection scenarios. I avoided the impersonal tone of form letters to ensure each response feels considered and respectful.

Traditionally, this process would heavily rely on human input, often supported by templates with predefined tags. However, my new approach uses generative AI that draws from the information provided. It generates professional, customized correspondence that aligns with these detailed requirements.

I am thrilled with today's solution: it discards templates completely and produces professional, highly tailored correspondence that meets these requirements. While I recommend a final human review before anything is sent, this new method greatly improves the productivity of the people tasked with handling this activity.

Leveraging AI to Enhance Literary Communications

If you want your stories accepted by a publisher, the importance of a well-crafted query cannot be overstated. A query essentially serves as a cover letter, often accompanied by a manuscript, and is the first point of contact with a publisher or literary agent. I used a two-step process.

The Author Crafting the Query

My initial step involved compiling a comprehensive dataset, including details about the author, the narrative, the publisher, and specific submission guidelines. The AI tool was tasked with integrating these varied data points into a professional and engaging query letter on behalf of the author.

The outcome was a tailored communication piece that not only adhered to the specific requirements of the publisher but also maintained a tone and structure befitting a professional literary submission.

The Publisher/Agent Response

The second phase of my experiment involved adopting the perspective of a literary agent or publisher responding to the query. The responses could vary: acceptance, rejection, or an indication that the submission was still under review. I crafted three distinct prompts, each aligned with a potential outcome.

Through this process, the adaptability and precision of ChatGPT-4 (an upgrade from the previous 3.5 version) became evident, offering more nuanced and contextually appropriate responses.

Observations and Insights

Several key observations emerged during this experiment:

  • When considering different submission formats, the AI tool adeptly adjusted its output. For instance, in the postal mail format, it included the current date (provided by me, as ChatGPT does not access real-time data) and correctly positioned the publisher's address.
  • Adhering to the publisher's guidelines, ChatGPT intelligently included a note about the first 10 pages of the manuscript being part of the submission, showcasing its ability to incorporate specific guideline details.
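The format-dependent behavior in the first bullet can be sketched as a simple conditional. This is a hypothetical Python rendering of the rule, not code from the app; the function and parameter names are mine, and the date is passed in explicitly because ChatGPT cannot fetch it:

```python
from datetime import date

def letter_header(submission_format: str, publisher_address: str,
                  today: date) -> str:
    """Postal queries open with the current date and the publisher's
    address; email queries omit both. The date is supplied by the app,
    since ChatGPT has no access to real-time data."""
    if submission_format == "postal":
        return f"{today.strftime('%B %d, %Y')}\n\n{publisher_address}\n\n"
    return ""
```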

Sample Query Output

As a result of this experiment, I was able to generate a series of professional, well-structured queries. Below is an example.

Sample Query

Enhancing Transparency in AI-Generated Content: A Case Study in Literary Publishing

In the rapidly evolving landscape of literary publishing, the integration of generative AI has become a pivotal yet sensitive subject. To address this, I implemented an approach that ensures the transparent and ethical use of AI technology.

Prominent Display of AI Involvement

I prominently included the ChatGPT icon. This decision was made with the intent to clearly signify the involvement of generative AI in content creation. Given the nuanced nature of AI's role in the publishing industry, it is imperative to maintain an open and honest dialogue about its use. By incorporating the ChatGPT icon throughout the app, users are consistently reminded and made aware of the AI-generated content and its context within the app.

Tailoring Responses for Varied Publishing Outcomes

For the feature that generates responses from publishers or literary agents, I crafted distinct instructions for each potential outcome: acceptance, rejection, or pending consideration. This differentiation is crucial in mimicking the varied nature of real-world publishing scenarios.

Example of an AI-Generated Acceptance Response

To illustrate the application's capability, here is a sample response where the publisher accepts the submission for publication:

Sample Response

For the Accepted response, I included the content from the publisher’s web site and its submission-guidelines page. For the Pending and Rejected responses, I omitted this content, since it would add little to the response while greatly increasing token usage, adding to the cost and affecting performance.
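That trade-off can be expressed as a small guard. The sketch below is mine (a Python illustration, not the app's FileMaker script), with hypothetical function and parameter names:

```python
def build_response_context(outcome: str, submission_info: str,
                           publisher_pages: str) -> str:
    """Attach the publisher's web-site and guideline pages only when
    the outcome is 'accepted'; for 'pending' and 'rejected' they add
    little and would inflate token usage."""
    if outcome == "accepted":
        return submission_info + "\n\n" + publisher_pages
    return submission_info
```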

Behind the Scenes

For implementing the author’s query, I wrote the function ‘OpenComposeQueryChatGPT’. I assembled the prompt on lines 40 to 43, using the structure of an Instruction (prompt ID 96) followed by dynamic data from the database. The query prompt I used is as follows:

I am a writer working on submitting my story .... Please evaluate the following information pulled from the publisher’s web site regarding their publication. First, their home page that describes their publication. Second, ... to evaluate: 
[List of information from the database specific to this submission.]
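In Python terms, the Instruction-plus-dynamic-data structure looks roughly like this. This is a sketch under my own naming; the app actually stores the instruction text in a prompts table and pulls the submission fields from FileMaker records:

```python
# Hypothetical stand-in for the app's prompts table, keyed by prompt ID.
PROMPTS = {
    96: "I am a writer working on submitting my story ... Please evaluate "
        "the following information pulled from the publisher's web site ...",
}

def assemble_query_prompt(prompt_id: int, submission_fields: list[str]) -> str:
    """Instruction first, then the dynamic data for this submission."""
    instruction = PROMPTS[prompt_id]
    return instruction + "\n" + "\n".join(submission_fields)
```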

Prompts for ChatGPT to Generate the Query

The code for the above prompt is shown in the function below, on lines 33 to 37, which is what I use to call the ChatGPT API.

Function ‘OpenComposeQueryChatGPT’
See lines 42 to 44

For implementing the publisher’s response, I wrote the function ‘OpenComposeResponse for AI’. This function opened the screen for composing the response and gathered information needed for later use. It did not interact with ChatGPT, since I needed the user's choice of overall response: accepted, rejected, or pending. I then called the function ‘SwitchComposeResponsePrompts’ with a parameter indicating the type of response; this in turn calls ‘Send Synthetic Compose Response’, which builds the prompt for ChatGPT and calls the ChatGPT API.
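Rendered as Python, the call chain might look like the sketch below. The real scripts are FileMaker; the function names mirror the scripts, but the signatures and abbreviated template wording are my assumptions:

```python
# Outcome-specific instruction templates, one per response type
# (wording abbreviated here).
RESPONSE_TEMPLATES = {
    "accepted": "... has ACCEPTED the story and would like to publish it ...",
    "rejected": "... must regretfully pass on this submission ...",
    "pending": "... the submission is still under review ...",
}

def switch_compose_response_prompts(response_type: str,
                                    submission_info: str) -> str:
    """Pick the template for the user-chosen outcome, then build the
    final prompt (the real script would go on to call the ChatGPT API)."""
    if response_type not in RESPONSE_TEMPLATES:
        raise ValueError(f"unknown response type: {response_type!r}")
    return send_synthetic_compose_response(
        RESPONSE_TEMPLATES[response_type], submission_info)

def send_synthetic_compose_response(instruction: str,
                                    submission_info: str) -> str:
    """Instruction followed by the dynamic data for this submission."""
    return instruction + "\n" + submission_info
```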

Function ‘Send Synthetic Compose Response’

I assembled the prompt on line 53. I used the structure of an Instruction followed by dynamic data from the database. The response prompt I used for an Accepted submission is as follows:

I am a publisher or literary agent responding to a query by an author regarding a submission of their story. I would like to generate a response that is tailored to this author from myself as this publisher. Please evaluate ... ACCEPTED the story and would like to publish it. This should have an upbeat tone and be professional. Here is the information to evaluate:
[List of information from the database specific to this submission.]

Prompts for ChatGPT to Generate the Response (Accepted)

Lessons Learned

Integrating generative AI into the writing process has been both enlightening and transformative. This experience has reinforced the timeless adage of 'garbage in, garbage out', underscoring the importance of quality input. When provided with high-caliber data, generative AI demonstrates remarkable proficiency, especially in tasks such as creating correspondence.

Writing, traditionally a labor-intensive and deeply human task, is now being revolutionized by AI. To enhance consistency and minimize errors in AI-generated content, I set the 'temperature' parameter of the API calls to zero. This adjustment reduces variability and limits the occurrence of 'hallucinations', the generation of false information. While it's impossible to guarantee complete accuracy, the speed and efficiency with which ChatGPT processes and produces content are both compelling and, frankly, addictive.
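For reference, the deterministic setting looks like this in a chat-completion request payload. This is a sketch of the request shape as I understand the API; the model name and helper function are my own illustration:

```python
def build_chat_request(prompt: str, model: str = "gpt-4") -> dict:
    """Request payload with temperature pinned to zero to reduce
    variability (and, hopefully, hallucinations) between runs."""
    return {
        "model": model,
        "temperature": 0,
        "messages": [{"role": "user", "content": prompt}],
    }
```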

Despite the advancements in AI, the responsibility for the final content remains unequivocally human. It is crucial to review and refine the AI-generated drafts, ensuring accuracy and relevance. As AI technology continues to evolve and errors become less frequent, there's a risk of diminishing scrutiny over its outputs. However, it's important to remember that editing a draft is often easier than crafting one from scratch, which makes the adoption of AI in writing not just practical but inevitable.

For my next post, I'll delve into the use of generative AI to extract structured information from blocks of text. For more insights and updates, don't forget to follow me here!


Really great series @dswatski ! I particularly liked your thoughts on navigation and the user specific summary on startup.

Have you considered how the FileMaker security model interfaces with your use of an LLM? For example, what if a user doesn’t have access to a layout that the LLM sends them to? Perhaps it’s as easy as adding some guard rails to your navigation script to ensure that the user has access to the destination layout prior to the GTL step. And maybe this is no different from traditional (non-LLM enhanced) navigation.

In any case, great job. I’m looking forward to following the rest of this journey.

Again, thanks for the input. For my application I was not concerned with access to any of the screens except one, and I simply added code to the final navigation step so that a user without access is taken to an alternate screen that everyone can reach. Beyond that, the script runs as that user and I did not grant it admin access, so it should abide by the security model.

Also, the table where I defined the target layouts specifies the IDs for only a small subset (about 17) of the total number of layouts I defined in the app. I found as well that some commands applied to whatever screen I was on; for example, if I asked it to display the tutorial, it would not leave the current screen. It is worth keeping in mind that an add-on layer like this should not be used to circumvent the security model.
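The guard rail described above amounts to a one-line check before the Go To Layout step. Sketched here in Python for illustration (the actual script is FileMaker, and all names are mine):

```python
def resolve_target_layout(requested_id: int, accessible_ids: set[int],
                          fallback_id: int) -> int:
    """Send the user to the LLM-chosen layout only if they have access
    to it; otherwise fall back to a layout everyone can reach."""
    return requested_id if requested_id in accessible_ids else fallback_id
```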