How ChatGPT is Changing the Game for App Development

Nicholas Ptacek
11 min read · Mar 2, 2023


As artificial intelligence (AI) continues to gain momentum in various industries, developers are constantly seeking ways to streamline their workflow and optimize their interactions with AI systems. And one of the latest innovations in this space is using ChatGPT to bootstrap app development.

In this writeup, we’ll explore how ChatGPT is revolutionizing app development by enabling developers to communicate more naturally with GPT-3 and quickly generate code snippets with greater accuracy.

A humanoid AI answering the telephone [created with KREA Canvas]

As someone who works with AI on a daily basis, I’ve often found typing to be a bottleneck in my workflow, even with AI presets and playgrounds in place. That’s why I started exploring other methods of communication, immediately gravitating towards voice. However, I soon realized that dealing with unstructured data from external audio files was not ideal for my workflow.

Instead, I came up with a solution that involved in-line speech-to-text processing. By leveraging macOS’s built-in capabilities, I could not only generate audio for GPT-3’s side of the conversation but also listen for speech in much the same way as Siri or Alexa does. By extracting a text transcript on the fly, I could pass it directly to GPT-3, eliminating the need for external audio files. To make the process even more efficient, I added an input text field and an output text field for GPT-3’s responses.
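The speech plumbing itself isn’t the focus of this writeup, but for the curious, here’s a rough sketch of the general approach, assuming Apple’s Speech framework for live transcription and NSSpeechSynthesizer for the spoken replies. The class and method names below are placeholders for illustration, not my production code:

#import <AVFoundation/AVFoundation.h>
#import <AppKit/AppKit.h>
#import <Speech/Speech.h>

// Rough sketch of the in-line speech pipeline (illustrative placeholder code,
// not the production implementation). Requires speech-recognition authorization
// via +[SFSpeechRecognizer requestAuthorization:] before use.
@interface SpeechBridge : NSObject
@property (nonatomic, strong) AVAudioEngine *audioEngine;
@property (nonatomic, strong) SFSpeechRecognizer *recognizer;
@property (nonatomic, strong) SFSpeechAudioBufferRecognitionRequest *request;
@property (nonatomic, strong) NSSpeechSynthesizer *synthesizer;
@end

@implementation SpeechBridge

- (void)startListening {
    self.audioEngine = [[AVAudioEngine alloc] init];
    self.recognizer = [[SFSpeechRecognizer alloc] init];
    self.request = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
    self.request.shouldReportPartialResults = YES;

    // Feed microphone buffers straight into the recognition request;
    // no external audio files involved.
    AVAudioInputNode *inputNode = self.audioEngine.inputNode;
    AVAudioFormat *format = [inputNode outputFormatForBus:0];
    [inputNode installTapOnBus:0 bufferSize:1024 format:format block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
        [self.request appendAudioPCMBuffer:buffer];
    }];
    [self.audioEngine prepare];
    [self.audioEngine startAndReturnError:nil];

    // Hand off each transcript as it arrives, e.g. to the GPT-3 request code shown later.
    [self.recognizer recognitionTaskWithRequest:self.request resultHandler:^(SFSpeechRecognitionResult *result, NSError *error) {
        if (result) {
            NSString *transcript = result.bestTranscription.formattedString;
            NSLog(@"Transcript so far: %@", transcript);
        }
    }];
}

// Speak GPT-3's side of the conversation aloud.
- (void)speakResponse:(NSString *)text {
    self.synthesizer = [[NSSpeechSynthesizer alloc] initWithVoice:nil];
    [self.synthesizer startSpeakingString:text];
}

@end

With something along those lines in place, each transcript can be fed into the same request pipeline described below.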

After testing this method in a controlled environment, I knew it was time to integrate the OpenAI API and work with live data from GPT-3. Although the API documentation didn’t provide example code in the language I wanted to use, this posed no problem as ChatGPT was more than capable of handling the task.

Throughout this writeup, I’ll break down my interactions with ChatGPT prompt-by-prompt to illustrate how each interaction guided the model towards the desired output.

Start by defining a goal and requesting help with a specific task

Providing one of OpenAI’s own examples helped start things off in the right context

In this case, I defined a goal by telling ChatGPT that we would be building an Objective-C app that can perform some simple Application Programming Interface (API) requests with GPT-3.

In one sentence, I specified multiple pieces of context:

  • An overall context/goal (building an app)
  • A programming language (Objective-C)
  • The app’s scope (perform simple API requests with GPT-3)

This is important for guiding the AI model towards the specific kind of output we’re looking for right from the start. By providing clear context on the how and the what, ChatGPT is much more likely to generate output in line with our expectations.

After setting the scene with context clues, it’s time to make a request to the AI. The way a request is worded has the single greatest effect on the output, but that effect depends on the context you’ve provided. If no context is specified, outputs tend to be more generic (and often less useful), so you’re almost always going to want a sentence or two of context to help pave the way before making your actual request.

Continue providing context throughout the conversation

For the request, I continued providing deeper context to guide the model’s output even further, essentially asking for a translation between two forms of the same request (a curl terminal command and native Objective-C). Specifically, the prompt included:

  • A conversion from curl to Objective-C
  • Native Objective-C only (versus third-party libraries). This constraint narrows the output, which is good: without it, the model is more likely to recommend third-party solutions that are much less likely to work across operating system versions.
  • A request to show me how (raising the chance of an explainer-type response)
  • The curl code snippet delineated with ``` (which OpenAI uses to indicate code blocks in systems like the GPT-3 playground), keeping it conceptually separate from the rest of the prompt provided up to that point; the snippet itself is reproduced below.
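For reference, the snippet in question was OpenAI’s own documented completions example, which at the time looked roughly like this (note how its parameters line up with the Objective-C code ChatGPT generates below):

curl https://api.openai.com/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "text-davinci-003",
    "prompt": "Say this is a test",
    "max_tokens": 7,
    "temperature": 0
  }'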

Please and Thank You

Throughout my conversation with ChatGPT, I made sure to thank it when it was generating the type of output I was looking for. This helps reinforce the context I originally supplied, guiding the model to continue generating usable code. When it generates something you don’t want, you can tell it that you were expecting something different (being as descriptive as possible), and it can often regenerate the code to be more like what you were originally looking for.

Note: Some people debate the necessity of including words like please and thank you in the course of our interactions with these AI models. My general philosophy is to try and be polite in my interactions with everybody I meet in life, and whether they’re human or AI doesn’t matter to me. Beyond the myriad debates about consciousness and agency that are sure to come regarding these AI models, I feel that time will prove these types of phrases to have a net positive effect on prompt output.

After crafting this initial prompt for ChatGPT, it generated the following code:

NSString *apiKey = @"YOUR_API_KEY";
NSString *model = @"text-davinci-003";
NSString *prompt = @"Say this is a test";
NSInteger temperature = 0;
NSInteger maxTokens = 7;

NSURL *url = [NSURL URLWithString:@"https://api.openai.com/v1/completions"];
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
[request setHTTPMethod:@"POST"];

NSDictionary *requestData = @{@"model": model,
@"prompt": prompt,
@"temperature": @(temperature),
@"max_tokens": @(maxTokens)};
NSData *jsonData = [NSJSONSerialization dataWithJSONObject:requestData options:0 error:nil];
[request setHTTPBody:jsonData];

[request addValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
[request addValue:[NSString stringWithFormat:@"Bearer %@", apiKey] forHTTPHeaderField:@"Authorization"];

NSURLSession *session = [NSURLSession sharedSession];
NSURLSessionDataTask *task = [session dataTaskWithRequest:request completionHandler:^(NSData * _Nullable data, NSURLResponse * _Nullable response, NSError * _Nullable error) {
    if (error) {
        NSLog(@"Error: %@", error);
        return;
    }

    NSHTTPURLResponse *httpResponse = (NSHTTPURLResponse *)response;
    if (httpResponse.statusCode != 200) {
        NSLog(@"HTTP Error: %ld", (long)httpResponse.statusCode);
        return;
    }

    NSDictionary *responseData = [NSJSONSerialization JSONObjectWithData:data options:0 error:nil];
    NSLog(@"Response Data: %@", responseData);
}];
[task resume];

From generating code to explaining it

ChatGPT was even helpful enough to explain what the code does! This was likely because I had asked it to show me how to do something.

This barebones code logs the raw API response to the console for each request, but if we want to display it to the user in a sensible manner, we’re going to need to parse the API response and show the appropriate output in the graphical user interface (GUI).

To do so, I first praised ChatGPT for a job well done, reinforcing that it’s generating the type of output I want. I then went ahead and provided my request to parse and display the output along with a copy of the response data I received back from the GPT-3 API.
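The raw response isn’t reproduced here, but a completions response from the API had roughly this shape (the values below are illustrative, not the exact data I received):

{
  "id": "cmpl-...",
  "object": "text_completion",
  "created": 1677700000,
  "model": "text-davinci-003",
  "choices": [
    {
      "text": "\n\nThis is indeed a test",
      "index": 0,
      "logprobs": null,
      "finish_reason": "length"
    }
  ],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 7,
    "total_tokens": 12
  }
}

The text we want to show the user lives at choices[0].text, which is exactly what the generated parsing code below pulls out.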

Once again, ChatGPT had no problem generating the applicable Objective-C code and corresponding explanation of how it works.

NSDictionary *responseData = [NSJSONSerialization JSONObjectWithData:data options:0 error:nil];
NSArray *choices = responseData[@"choices"];
NSDictionary *choice = choices[0];
NSString *text = choice[@"text"];

dispatch_async(dispatch_get_main_queue(), ^{
    self.textField.stringValue = text;
});

At this point I thanked the AI model once again, reinforcing that its output was what I was looking for. Next it was time to set up the interface for the app. This part could be done programmatically, but doing it by hand in Interface Builder is fine for small tests like this.
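For reference, the programmatic route would look something along these lines (illustrative only, assuming the app delegate’s standard window outlet; the actual test app wired things up in Interface Builder):

// Illustrative sketch only; the real app used Interface Builder for this step.
NSTextField *outputField = [[NSTextField alloc] initWithFrame:NSMakeRect(20, 20, 360, 24)];
outputField.editable = NO;
[self.window.contentView addSubview:outputField];
self.textField = outputField;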

What to do when you get stuck

After creating the corresponding user interface elements in Xcode’s Interface Builder, I ran into a problem. It had been a while since I’d worked with Objective-C, and my mind was blanking on the correct way to declare the references for UI elements. Luckily, ChatGPT was able to clear things up for me immediately.

@interface AppDelegate : NSObject <NSApplicationDelegate>

@property (nonatomic, weak) IBOutlet NSTextField *textField;

@end

As before, ChatGPT provided helpful explanations to go along with the code it generated

I continued praising ChatGPT for a job well done.

After testing that everything was working correctly in both the code and the UI, I decided that I wanted to add more functionality. Rather than simply sending the same hardcoded prompt to the GPT-3 API server each time, I wanted to be able to send arbitrary text as a prompt. Since ChatGPT had done an excellent job thus far, I turned to it once more:

In this prompt, I praised the AI model for the solid code it had generated so far, then explained what I was trying to do and the steps I had taken to that point. I also gave it instructions on what it shouldn’t do this time around (“Say this is a test”), making it clear that we were now looking to do something different. This helps guide the model away from repeating the previous code verbatim, which isn’t what we want here.

ChatGPT then supplied two blocks of code in response, clearly showing which part goes in which source code file in the Xcode project:

@interface AppDelegate : NSObject <NSApplicationDelegate>

@property (nonatomic, weak) IBOutlet NSTextField *textField;
@property (nonatomic, weak) IBOutlet NSTextField *inputTextField;
@property (nonatomic, weak) IBOutlet NSButton *sendButton;

@end

ChatGPT also included a brief synopsis of what each part of the code does:

- (IBAction)sendButtonClicked:(id)sender {
    NSString *apiKey = @"YOUR_API_KEY";
    NSString *model = @"text-davinci-003";
    NSString *prompt = self.inputTextField.stringValue;
    NSInteger maxTokens = 1337;

    NSURL *url = [NSURL URLWithString:@"https://api.openai.com/v1/completions"];
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
    [request setHTTPMethod:@"POST"];

    NSDictionary *requestData = @{@"model": model,
                                  @"prompt": prompt,
                                  @"max_tokens": @(maxTokens)};
    NSData *jsonData = [NSJSONSerialization dataWithJSONObject:requestData options:0 error:nil];
    [request setHTTPBody:jsonData];

    [request addValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
    [request addValue:[NSString stringWithFormat:@"Bearer %@", apiKey] forHTTPHeaderField:@"Authorization"];

    NSURLSession *session = [NSURLSession sharedSession];
    NSURLSessionDataTask *task = [session dataTaskWithRequest:request completionHandler:^(NSData * _Nullable data, NSURLResponse * _Nullable response, NSError * _Nullable error) {
        if (error) {
            NSLog(@"Error: %@", error);
            return;
        }

        NSHTTPURLResponse *httpResponse = (NSHTTPURLResponse *)response;
        if (httpResponse.statusCode != 200) {
            NSLog(@"HTTP Error: %ld", (long)httpResponse.statusCode);
            return;
        }

        NSDictionary *responseData = [NSJSONSerialization JSONObjectWithData:data options:0 error:nil];
        NSArray *choices = responseData[@"choices"];
        NSDictionary *choice = choices[0];
        NSString *text = choice[@"text"];

        dispatch_async(dispatch_get_main_queue(), ^{
            self.textField.stringValue = text;
        });
    }];
    [task resume];
}

What to do when ChatGPT’s output ends prematurely

At this point, ChatGPT had generated a really long response and cut off mid-sentence. Thankfully, there’s an easy way to proceed from here: simply ask ChatGPT to keep going!

If ChatGPT cuts off mid-sentence, simply ask it to keep going!

I reinforced that ChatGPT was being helpful, and let it know I was going to go test the code suggestions.

ChatGPT’s code worked flawlessly at this point, and I started making improvements of my own — namely, adding some simple speech synthesis code so I could hear GPT-3’s responses as spoken words instead of simply text on my computer screen.
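That addition isn’t reproduced verbatim here, but it amounted to something like the following (an illustrative reconstruction; the important detail is that it spoke the text without any nil or empty-string check):

// Illustrative reconstruction of the unguarded version:
dispatch_async(dispatch_get_main_queue(), ^{
    self.textField.stringValue = text;
    NSSpeechSynthesizer *speechSynth = [[NSSpeechSynthesizer alloc] initWithVoice:nil];
    [speechSynth startSpeakingString:text];
});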

I decided to let ChatGPT know:

ChatGPT can critique your code

What came next was both unexpected and appreciated. As I was just writing some quick and dirty test code, I didn’t bother following proper coding conventions to check for edge cases. Luckily, ChatGPT had my back and immediately let me know about a potential issue with my code:

This is something that likely would have come up during unit testing, but ChatGPT was able to identify the issue incredibly early in the design process, and even offered the defensive code needed to avoid this particular stumbling block.

dispatch_async(dispatch_get_main_queue(), ^{
    self.textField.stringValue = text;
    if (text && ![text isEqualToString:@""]) {
        NSSpeechSynthesizer *speechSynth = [[NSSpeechSynthesizer alloc] initWithVoice:nil];
        [speechSynth startSpeakingString:text];
    }
});

And the icing on the cake is that ChatGPT explained how this code fix works in practice:

I ended the conversation with a final note of appreciation for ChatGPT, once again reinforcing that it had correctly generated the information I needed.

In the past, generating code snippets with AI has been hit or miss. While these systems can generally handle simple tasks, they often struggle with more complex ones, providing generic or half-baked answers. However, with ChatGPT, I was able to provide more information and context, allowing the system to generate flawless code on the first try.

Key takeaways:

  • You’re almost always going to want a sentence or two of context to help pave the way before making your actual request.
  • Be sure to let the AI model know when it’s doing a good job and generating the type of output you’re looking for.
  • Be as precise as possible throughout your conversation with ChatGPT, continually reinforcing the task at hand by providing additional context as necessary.
  • ChatGPT is capable of self-documenting the code it generates, and can help explain what each part does.
  • If you get stuck, try asking ChatGPT for help.
  • If you have existing code, ChatGPT can offer critiques and best practices.

By leveraging ChatGPT’s capabilities, app development can become more efficient and natural, streamlining interactions with AI and ultimately enhancing the development process.

Related Links:

https://twitter.com/nptacek/status/1601519073585922050

Nicholas Ptacek is a seasoned cybersecurity expert and accomplished writer with almost two decades of experience in the industry. Throughout their career, Nicholas has built award-winning computer security software and contributed to various print and news media outlets, including CNNMoney, Macworld, and MacDirectory magazine. They have also made appearances in publications like The Information and Vice.

In recent years, Nicholas has been exploring the intersection of art and technology, documenting the AI landscape and experimenting with generative AI models such as ChatGPT, GPT-3, Stable Diffusion, and DALL-E. Their work in this field has been covered extensively and featured in exhibitions across the globe, including the Artist x AI 000003 exhibition curated by Superchief Gallery NFT and Claire Silver at NFT Paris.

If you want to stay up-to-date with Nicholas’s AI experiments, you can follow them on Twitter at: @nptacek

This is essay 2 of 4 for TNS Creators’ 3rd Cohort
