Today, with Melissa as PIC and me as flight test engineer, we drove up and down Hwy 101 to take some data. Our goals were:
- Validate the test infrastructure -- servos, driver code, Python post-processing, etc.
- Get a super quick order of magnitude check on theoretical vs. actual data
We succeeded at both, which is great. However, be warned as you look at the results below: our data is utter dreck. It is the worst data in recorded history. It's as if all of 2020 were condensed into a CSV file and labeled our data. It's the kind of data you'd find at the bottom of a 40-year-old outhouse, after....
Ok, you get the idea. Poor data is poor. The reason, we believe, is that we were taking data in the midst of traffic. That's fine. We don't claim to be doing science here. This is just a way to get some numbers so that, when we show Airball to someone, they don't look at us funny and suppress a laugh. We need our IAS (especially) and alpha / beta (why not?) numbers to look non-parodic.
First of all, here is the obligatory shot of the probe and its bandana flag, flapping bravely in the wind:
The entire ball o' wax -- data and Python -- is in the airball-data Git repo. First we plot the values of (dp0/q), (dpA/q), and (dpB/q). (These values are defined in a previous blog post.)
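For the curious, here is a minimal sketch of that first plot. The file name and column names are hypothetical, and we assume q has already been computed per sample; the real post-processing lives in the airball-data repo:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file and column names -- the real layout is in the airball-data repo.
log = pd.read_csv("drive_log.csv")

fig, ax = plt.subplots()
for col in ("dp0", "dpA", "dpB"):
    # Each differential pressure normalized by dynamic pressure q,
    # assuming q was computed per sample (e.g. from the car's speed).
    ax.plot(log.index, log[col] / log["q"], label=f"{col}/q")
ax.set_xlabel("sample")
ax.set_ylabel("pressure ratio")
ax.legend()
plt.show()
```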
Although the data is really poor, you can see that the "shape" of the data sort of mimics what we would expect. With that in mind, we adopt a hypothesis that the data is equal to the theoretical values with a constant scaling factor applied. If we plot this scaling factor versus the sum of the squares of the errors between data and scaled theory, we get:
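That fit can be done as a brute-force sweep over candidate scale factors. Here is a sketch, assuming `data` and `theory` are aligned NumPy arrays of measured and theoretical pressure ratios for one channel (the actual fitting code is in the airball-data repo):

```python
import numpy as np

def best_scale(data, theory, scales=np.linspace(0.1, 1.5, 141)):
    """Grid-search a constant factor k minimizing sum((data - k * theory)**2).
    Returns (best k, SSE at each candidate k)."""
    sse = np.array([np.sum((data - k * theory) ** 2) for k in scales])
    return scales[np.argmin(sse)], sse

# Usage (hypothetical arrays of measured vs. theoretical dp0/q):
#   k, sse = best_scale(measured_dp0_q, theoretical_dp0_q)
#   plt.plot(np.linspace(0.1, 1.5, 141), sse)  # scale factor vs. SSE
```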
It appears that if we multiply the theoretical values of (dp0/q) by 0.5, and multiply the theoretical values of (dpA/q) and (dpB/q) by 0.7, we get the best fit.
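As a quick sanity check, the scaled theory can be overlaid on the measurements. A sketch, with hypothetical `data` and `theory` dictionaries of aligned arrays:

```python
import matplotlib.pyplot as plt

# Best-fit factors from the sweep above.
SCALE = {"dp0/q": 0.5, "dpA/q": 0.7, "dpB/q": 0.7}

def overlay(data, theory):
    """Overlay measured pressure ratios on scaled theory.
    `data` and `theory` map channel name to aligned 1-D NumPy arrays."""
    fig, ax = plt.subplots()
    for name, k in SCALE.items():
        ax.plot(k * theory[name], label=f"{k} x {name} (scaled theory)")
        ax.plot(data[name], ":", label=f"{name} (measured)")
    ax.set_xlabel("sample")
    ax.set_ylabel("pressure ratio")
    ax.legend()
    return fig
```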
We will plug this into the firmware, and proceed with construction and flight testing. Meanwhile we will see if we can get access to a wind tunnel, or persevere with our testing on the car. Stay tuned!