Helping Airport & Aerodrome Operators Develop Powerful Operational & Safety Systems
The Runway Centreline Blog
Inspiration for these posts can come from strange places. I’ve just returned from Canberra and while coming back through their airport, I noticed something that I wouldn’t immediately call art but after a moment, I changed my mind and declared it, to myself, the best piece of airport artwork I’ve ever seen.
Today is day 31, and the #blogathon has come to an end. The inspiration for this (mis)adventure was the release of ChatGPT, and, as a test, I thought I’d see if it could make my life easier. But, unfortunately, the short answer to that question is no.
I have spent a sizeable portion of this #blogathon critically discussing the power and capability of ChatGPT. I have used it, and I assume many other people use it, directly and personally. One-on-one, so to speak. Through this approach, I have built some interesting things over the last two months - Bird Strike Bot and a fast-time simulation. However, I still enjoy the much less predictable but often fruitful world of human interaction, whether it’s an active social media life or traditional methods like networking and formal education.
Header image: Helena Lopes (via Pexels)
Please note that Twitter killed the bot and then I deleted the account.
While I was proud, in yesterday’s introduction of the Bird Strike Bot, of the work I undertook with the help of ChatGPT to build and deploy a Twitter bot, I still think it is worth taking a critical look at its first couple of weeks to see how well it is performing.
Short answer: 😐
Header image: Pavel Danilyuk (via Pexels)
Much of my excitement associated with ChatGPT came from my early experimentation and the “success” of our first actual project. I’ve posted a few times now about Python programming, and in December, ChatGPT helped me take it to a new level.
’Cause we built a bot. And then Twitter killed it, and I deleted its account.
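For anyone curious what the bones of a bot like that look like, here is a minimal sketch. It is not the actual Bird Strike Bot code: it assumes the tweepy library, uses placeholder credentials, and leaves out the real bot’s data source and scheduling.

```python
# A minimal Twitter-bot sketch, assuming the tweepy library (v4+) and
# Twitter API credentials. These are placeholders, not the real bot's code.
import tweepy

client = tweepy.Client(
    consumer_key="YOUR_CONSUMER_KEY",
    consumer_secret="YOUR_CONSUMER_SECRET",
    access_token="YOUR_ACCESS_TOKEN",
    access_token_secret="YOUR_ACCESS_TOKEN_SECRET",
)

def post_update(text: str) -> None:
    """Post a single tweet, trimmed to Twitter's 280-character limit."""
    client.create_tweet(text=text[:280])

if __name__ == "__main__":
    post_update("An example bird strike summary would go here.")
```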
Header image: Tara Winstead (via Pexels)
I’ve ignored ChatGPT over the last couple of days as I dived into familiar topics. And given its dubious results when trying to be funny, I thought it best not to repeat the sins of the last few Fridays. Instead, today I have asked ChatGPT to acknowledge the final days of the #blogathon with a poem in the style of Edgar Allan Poe.
Header image: Tom Mossholder (via Pexels)
There are heaps of aviation movies, but very few are about or set primarily in an airport. In fact, I can only think of three. So what am I missing, and which one is the best?
Australia’s standards provide a good framework for managing the risks associated with aerodrome works, but they require significant digestion to understand. This month, I’ve already touched on the challenges related to writing standards, and this topic showcases the style of regulation that sets some boundaries and parameters within which an aerodrome operator is expected to build a process.
In today’s post, I would like to share how I used to train aerodrome works safety officers (WSOs) on the management of aerodrome works.
Not all aviation incidents are accidents, but they are all learning opportunities. This serious incident involving a larger private aircraft at a relatively quiet regional aerodrome builds on some of the lessons from yesterday’s post and helps me build towards tomorrow’s discussion on Australian aerodrome works safety standards. So, let’s discuss the day a Merlin took off from a closed runway at Gunnedah.
An aerodrome is a hazardous environment: lots of moving parts, competing objectives, humans being human, weather, etc. When we need to conduct airside works, we introduce even more hazards and more risk. And this requires a specific set of management activities. One of the worst aviation accidents involving aerodrome works was the 2000 crash involving Singapore Airlines Flight 006 (SQ006) at Taiwan Taoyuan International Airport.
The story behind this accident and its aftermath is complex, but let’s look at it from an aerodrome works safety management point of view.
I have only scratched the surface of this concept, and I am excited by its power, but it is time to wrap up this series on fast-time simulation. I’ve created a flight schedule, built an airport and subjected it to almost three years of punishment. Today’s post is a quick look at visualising the results and the decisions they could support.
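As a rough flavour of that visualisation step, here is a minimal sketch, assuming the simulation results were saved to a CSV file and plotted with pandas and matplotlib. The file name and column names are hypothetical, not the ones used in this series.

```python
# A minimal sketch of visualising fast-time simulation output, assuming the
# results were written to a CSV file. File name and column names are
# hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

results = pd.read_csv("apron_results.csv", parse_dates=["timestamp"])

# Plot apron bay occupancy over the simulated period.
fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(results["timestamp"], results["bays_occupied"])
ax.set_xlabel("Time")
ax.set_ylabel("Apron bays occupied")
ax.set_title("Simulated apron occupancy")
plt.tight_layout()
plt.show()
```

Even a simple occupancy chart like this starts to show the peaks that would drive a decision about apron capacity.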
Last weekend, with the help of ChatGPT, I started learning how to do fast-time simulation in Python. Modelling operations in Excel can be fun, but I wanted to step up my game. And since I don’t have access to any full-featured simulation platforms, I thought I would teach myself how to code this stuff.
This process started with background research and creating a schedule I could use for my simulations. Then I built an airport, its apron management processes and a way of tracking the apron’s performance. This weekend, I am playing God.
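To give a flavour of what a fast-time simulation in Python can look like, here is a minimal sketch, assuming the SimPy library. The apron capacity, arrival rate and turnaround times are made-up illustrative values, not the figures used in this series.

```python
# A minimal fast-time simulation sketch, assuming the SimPy library.
# The apron capacity, arrival rate and turnaround times are made-up
# illustrative values.
import random
import simpy

TURNAROUND_MIN, TURNAROUND_MAX = 35, 60  # minutes spent on an apron bay

def aircraft(env, name, apron, waits):
    arrived = env.now
    with apron.request() as bay:                 # queue for a free apron bay
        yield bay
        waits.append((name, env.now - arrived))  # time spent waiting for a bay
        yield env.timeout(random.uniform(TURNAROUND_MIN, TURNAROUND_MAX))

def arrivals(env, apron, waits):
    """Generate arrivals roughly every 20 minutes across one day."""
    flight = 0
    while env.now < 24 * 60:
        flight += 1
        env.process(aircraft(env, f"FLT{flight:03d}", apron, waits))
        yield env.timeout(random.expovariate(1 / 20))

env = simpy.Environment()
apron = simpy.Resource(env, capacity=4)          # four apron bays
waits = []
env.process(arrivals(env, apron, waits))
env.run(until=24 * 60)                           # simulate one day, in minutes

print(f"Flights that reached a bay: {len(waits)}")
print(f"Average wait for a bay: {sum(w for _, w in waits) / len(waits):.1f} min")
```

The series itself replaces the random arrivals with a proper flight schedule and tracks the apron’s performance in much more detail.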
We’re back with ChatGPT helping us write absolutely absurd inspirational speeches for some of our favourite airport facilities. If that sounds ridiculous, you’re right. If that sounds funny, you’re probably weird 😜.
Human factors is a rich field of research and application. It encompasses how our bodies work, how our minds think, and how we interact with each other and interface with tools, equipment and technology. It is absolutely ridiculous to think one can reduce human factors to mere factoids.
But it’s day 19 of this blogathon, and I need to write something firmly within my wheelhouse.
Aerodrome certification seems like both a mature system and an emerging concept. I acknowledge that while it has been in place in states like Australia for twenty years, some states are still working through the necessary regulatory development and implementation. Regardless, I’d like to go out on a limb and suggest that the concept is underdone. It takes an overly simplistic view of aerodromes and, as such, can hamper innovation and development.
Every written safety policy has the potential to be something great or something genuinely disappointing. I have worked at and with airports where the safety policy was the foundation of good decision-making, a touchstone that anchored logical and just decisions. Unfortunately, I have also seen policies treated with absolute disdain, sometimes by the senior manager who signed them. And while artificial intelligence like ChatGPT isn’t going to fix that, it might reset the relationship between a policy’s words and the safety manager tasked with writing it.
Over ten years ago, I posted a short recommendation that aerodrome operators should consider the risk of a NOTAM system failure. While even I may have considered such a thing a bit of a black swan event, this scenario recently played out in the United States and Canada. Obviously, plenty of people in the FAA and NAV Canada will be working on avoiding a repeat of these events, but aerodrome operators should also take the time to review what happened for their own lessons learned.
It’s Sunday night, and while I haven’t quite finished with my first foray into fast-time simulation, I am enjoying the process. Today, I set about “building” an airport for my simulation. To make sure it worked, I ran a simulation of one day’s flight schedule. ChatGPT has been here to help me, but, as I will discuss here and in the days following, it hasn’t been as big of a help as I had hoped.
Fast-time simulation is a topic that has always teased my interest. I’ve enjoyed building models of some pretty complex systems in Excel, but I’ve pushed that platform to the limit and making the jump up to more sophisticated modelling has always seemed like too big of a leap. It seemed the realm of hardcore coders or professional platforms that don’t have a “play around for free” option.
With ChatGPT on my side, I decided to give it a go this weekend. So, come along with me as I either leap to new heights or fall into an abyss.
Monday’s post, on SWA1248, was the only one I had started writing before I began this silly blogathon thing. It wasn’t completely written, but I had kicked it off early in December with the strong idea that ChatGPT was going to help me write it. It didn’t work out that well.
Today, I want to do a quick review of that experience to keep the levels of excitement around the AI revolution in check.
Please don’t get me wrong: AI is coming, and it’s coming fast. I saw a tweet the other day that said (in effect), “AI isn’t going to take your job, but someone better at using AI will.”
As you can imagine, our discussion so far has leaned towards the airport side of the GRF story. So, for today, I thought I would share a couple of videos that discuss the GRF from a pilot’s point of view.
Complete alignment with ICAO is, generally, a good policy to have. Their standards and recommended practices (as well as guidance material) are developed in a thoughtful and considerate way. This process is slow and methodical (perhaps frustratingly so at times). Experts from around the world participate, often in their own time and in addition to their day job, with further support from the ICAO Secretariat. For my part, I enjoy being a part of this process.
But what happens with the standards that don’t quite gel with the operational environment in your state?
In my day job, I've been working on Australian standards for implementing the Global Reporting Format (GRF). Unfortunately, as with many advances in aviation, this was a change brought about by an accident. While there were likely many influential accidents and incidents, I want to analyse the critical inciting event in this post.
Southwest Airlines Flight (SWA) 1248 was miraculous in that everyone on the aircraft survived. But it was also tragic, with the death of a child who was not even at the airport. And it triggered a lot of action by the Federal Aviation Administration (FAA), the International Civil Aviation Organisation and many other civil aviation authorities.
Just under four years ago, I wrote a primer on “Urban Air Mobility.” I had the best intentions in developing a series of articles on vertiport design concepts and standards. But I never did.
At the time, there was practically no data on aircraft performance nor any indication from regulators regarding how they would manage these new facilities. And I had no idea I would be part of a crack team developing Australian vertiport design, operations and certification standards. But here we are.
In this post, let’s take a moment to review the current state of play.
On the heels of my post on ChatGPT summarising incident reports, I wanted to highlight another summary workflow that is gaining traction, as well as a recent online webinar that included me. In early December, my CASA colleagues Joe Hain, Liam Smith and I held an introductory webinar on the draft Advisory Circular that just went out for consultation.
But maybe you were too busy to attend, and perhaps you’re still too busy to sit there and watch the video.
If this is the case, here comes ChatGPT to the rescue.
Header image: Judit Peter (via Pexels)
As a generative natural language model, ChatGPT is good at writing. And thanks to its colossal training data set (something like the whole Internet up to September 2021), it already knows a lot of stories. Moreover, it is capable of writing in a multitude of different styles.
So, for a quick “Friday funny” post, please enjoy Shakespeare’s Romeo & Juliet re-imagined as an airport safety love story (no tragedy in this one).
Having mentioned this great new technology a few times, perhaps I should talk about what ChatGPT could do for you. Well, at its core, ChatGPT reads and it writes, and that got me thinking about similar tasks I’ve had to do in the past. Reading incident reports and summarising them for my bosses came to mind almost at once. So, let’s test it out.
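If you would rather script that kind of summarisation than paste reports into a chat window, here is a rough sketch. It assumes the openai Python package (version 1 or later), an API key set in the environment, and placeholder prompt wording and report text; none of it comes from the post itself.

```python
# A rough sketch of scripting an incident-report summary, assuming the
# openai Python package (v1+) and an OPENAI_API_KEY environment variable.
# The model name, prompt wording and report text are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_report(report_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Summarise aviation incident reports in three short "
                        "bullet points for a busy airport executive."},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarise_report("Paste the incident report text here."))
```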
Compared to the Arctic Blast that ripped through North America a couple of weeks ago, my experience with airport winter operations has been fairly limited. And while a little snow and ice can be treacherous, the amount of frozen water that was dumped over there was downright deadly.
And about 100 more articles can be found here
TAGS
- Wildlife Management
- Videos
- Risk Assessment
- Risk
- ChatGPT
- Safety Regulation
- Blogathon
- Funny
- Safety Hazard
- Safety Management
- Aircraft Accident
- Accident Investigation
- Airport Wildlife Risk Management
- Risk Evaluation
- Aerodrome Standards
- Probability-Impact Graph
- Risk Matrix
- Training
- Risk Analysis
- Airport Safety Week
- Culture
- Court Decisions
- Likelihood & Consequence
- Risk Management
- Safety Management Systems
- Runway Safety
- Accident Analysis
- NotAnotherCOVID19post
- Aerodrome Manual
- Safety Assurance
- Accountability
From time to time, I like to write about wildlife-strike-related research. In my wrap-up of last September’s AAWHG Forum, I hinted at a presentation I delivered on wildlife strike costs in Australia and promised that more details were coming soon. Well, today is soon! A couple of nights ago, my first ever peer-reviewed academic journal article was published*, and it has the very scientific-sounding title of “Estimating the Cost of Wildlife Strikes in Australian Aviation Using Random Forest Modeling.”