Results matching “ATC”

A Look at ACARS over Satellite Communication (SATCOM)

An old article from ten years ago, "Notes on Receiving ACARS Signals Yourself", covered receiving ACARS signals on VHF.
What does ACARS look like over satellite communication (SATCOM)? I looked into the references a bit.

First, look at the Wikipedia page for the disappearance of Malaysia Airlines Flight MH370.
The satellite-communication timeline is as follows:


- Flight time 01:30: the Inmarsat-3 F1 satellite received the first of seven handshake signals.
- Flight time 07:30: Inmarsat-3 F1 captured the last complete handshake from the aircraft's ACARS, indicating it was still in flight at that point.
- Flight time 07:38: an unscheduled, unexplained "partial handshake" transmitted by the aircraft; by this time the post-arrival reserve fuel required by operating procedures would have been nearly exhausted.
- Flight time 08:34: the aircraft did not respond at Inmarsat's next scheduled handshake.

Inmarsat is the International Maritime Satellite Organization; the satellite services Inmarsat provides can likewise be found on Wikipedia.

The Inmarsat-3 F1 satellite covers the Indian Ocean region. It was launched on April 3, 1996, and has been operating in space for 28 years.


A Detailed Look at 12.1.0

What's new in X-Plane 12.1.0


As you may know, the X-Plane team is growing at an unprecedented pace.
One result is larger updates that pack in more changes and cover more aspects of the X-Plane experience, and 12.1.0 is no exception!

What started as a graphics-focused release now also includes new aircraft systems, flight-model improvements, weather and ATC system improvements, enhancements to X-Plane for Professional Use, and a physics-based camera that makes you feel like you are really moving along with the aircraft.
This release is currently in internal testing, and we look forward to shipping it as a public beta soon.

Here is what you can expect:


Lots of New Features in X-Plane 12.06

X-Plane 12.06 Is Full of Many Things
As before, the translation of the content was done with ChatGPT.

I tested it after upgrading.

I am very happy with the new clouds: they look much closer to real clouds, and performance is good, with no drop in frame rate at all.

Clouds and Weather
Since X-Plane 12.0 shipped, we have been working on improving the performance, accuracy, and quality of the cloud and weather systems. 12.06 delivers the first two stages of this multi-step process:

The cloud shader is now faster and has fewer artifacts. Daniel rewrote the way the clouds are marched, fixing the zebra-striping problem and generally making the picture less pixelated and ugly.
The cloud shader also contains a dedicated path for cirrus clouds, which should look better than the cirrostratus we had in 12.0 ("very thin layered clouds at high altitude").
Alex and I rebuilt the noise functions that construct each weather type, for better-looking clouds of every kind.

While this includes some real-weather fixes, we did not overhaul real weather across the board; my thinking is that without proper rendering, we would have no way to tell whether real weather had actually improved.

Coming soon: in beta 2, the "Minecraft-style clouds" (blocky, cube-shaped clouds, especially under real weather) will be fixed, so enjoy them while you still can. The thick prism-shaped cirrus will also be fixed, and we will tune the presets and METAR parsing.

Looking further ahead: we plan to add a 2D "cloud shell" behind the 3D clouds to handle orbital views and make the Earth look less strange, and we will give real weather a thorough review and tuning pass.


X-Plane12 A330 POH 4

Glareshield Panel

Annunciator Panel
MASTER WARNING: lights up and flashes for level-3 severity warnings. The crew must cancel the alert.
MASTER CAUTION: lights up for level-2 severity cautions. The crew must cancel the alert.
AUTO LAND: lights up if the aircraft is flying an autoland that cannot be completed. The crew must take over control of the aircraft and cancel the alert.
SIDE STICK PRIORITY: lights up if there is a conflict because both pilots are making sidestick inputs at the same time.


Memo: chatgpt api helloworld

macOS install

curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3 get-pip.py
/Library/Developer/CommandLineTools/usr/bin/python3 -m pip install --upgrade pip

echo "export PATH=\$PATH:~/Library/Python/3.9/bin" >> ~/.zshrc
echo "export OPENAI_ORGANIZATION=org-xxxxx" >> ~/.zshrc
echo "export OPENAI_API_KEY=sk-yyyyy" >> ~/.zshrc
source ~/.zshrc

pip install openai
pip install urllib3==1.26.6

Usage
API Keys
Playground
Examples
Models
Price
Prompt engineering techniques

InstructGPT
This family of models can handle many natural-language understanding and generation tasks. It includes several models: text-ada-001, text-babbage-001, text-curie-001, and text-davinci-003. Ada is only capable of simple completion tasks but is also the fastest and cheapest model in the GPT-3 series. Babbage and curie are a little more powerful but also more expensive. Davinci can perform all completion tasks with excellent quality, but it is also the most expensive model in the GPT-3 family.

ChatGPT
The model behind ChatGPT is gpt-3.5-turbo. It is a chat model: it takes a series of messages as input and returns an appropriately generated message as output. While the chat format of gpt-3.5-turbo is designed to facilitate multi-turn conversations, it can also be used for single-turn tasks without conversation. For single-turn tasks, gpt-3.5-turbo performs comparably to text-davinci-003, and since it costs one-tenth as much for roughly equivalent quality, it is the recommended default for single-turn tasks as well.

messages:
system messages help set the behavior of the assistant.
user messages are the equivalent of a user typing a question or sentence in the ChatGPT web interface. They can be generated by the users of the application or set as an instruction.
assistant messages serve two purposes: they either store prior responses to continue the conversation, or are set as instructions giving examples of the desired behavior. Models have no memory of past requests, so storing prior messages is necessary to give context to the conversation and provide all relevant information.
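The three roles can be sketched as one chat request. This is a minimal sketch, assuming the 2023-era openai Python SDK (0.x, matching the `pip install openai` above); the helper name `build_messages` and the example content are my own, not from the source.

```python
def build_messages(question):
    """Assemble a message list: behavior, one example exchange, the new question."""
    return [
        {"role": "system", "content": "You are a concise assistant."},
        # A prior user/assistant pair acts as an example of the desired
        # behavior: the model keeps no memory between calls, so any context
        # must be resent in the message list every time.
        {"role": "user", "content": "What does ACARS stand for?"},
        {"role": "assistant",
         "content": "Aircraft Communications Addressing and Reporting System."},
        {"role": "user", "content": question},
    ]

# To actually send the request (needs OPENAI_API_KEY in the environment):
#
#   import openai
#   resp = openai.ChatCompletion.create(
#       model="gpt-3.5-turbo",
#       messages=build_messages("And what is SATCOM used for?"),
#   )
#   print(resp["choices"][0]["message"]["content"])
```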

Price
The davinci model costs roughly ten times as much as GPT-3.5 Turbo. We recommend using davinci only if you wish to do some fine-tuning.
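A back-of-the-envelope comparison makes the 10x ratio concrete. The per-1K-token prices below are the mid-2023 list prices as I recall them ($0.02 for text-davinci-003, $0.002 for gpt-3.5-turbo); verify them against the official pricing page before relying on them.

```python
def cost_usd(tokens, price_per_1k_usd):
    """Cost of a request that consumes `tokens` tokens at the given rate."""
    return tokens * price_per_1k_usd / 1000.0

# Assumed mid-2023 rates; check the pricing page for current numbers.
davinci = cost_usd(1500, 0.02)
turbo = cost_usd(1500, 0.002)
print(f"davinci ${davinci:.4f} vs turbo ${turbo:.4f} ({davinci / turbo:.0f}x)")
```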

Tips:
1. Thinking Step by Step
By adding "Let's think step by step" to the prompt, the model has empirically proven capable of solving more complicated reasoning problems. This technique is also called the Zero-shot-CoT strategy.
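Zero-shot-CoT amounts to appending the trigger phrase to an ordinary question. A minimal sketch (the wrapper function name and the sample question are my own choices):

```python
def zero_shot_cot(question):
    """Append the CoT trigger so the model writes out its reasoning first."""
    return question.rstrip() + "\n\nLet's think step by step."

prompt = zero_shot_cot(
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)
print(prompt)
```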

2. Few-shot learning
Refers to the ability of a large language model to generalize and produce valuable results with only a few examples in the prompt. Then, in the last line, we provide the prompt for which we want a completion.
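A few-shot prompt is just several worked input/output pairs followed by the query with an empty answer slot on the last line. The Q/A layout and the ICAO-code examples below are illustrative choices of mine, not from the source:

```python
def few_shot_prompt(examples, query):
    """Build a prompt: each (input, output) pair, then the query to complete."""
    blocks = [f"Q: {q}\nA: {a}" for q, a in examples]
    blocks.append(f"Q: {query}\nA:")  # empty slot for the model to fill
    return "\n\n".join(blocks)

print(few_shot_prompt(
    [("KJFK", "John F. Kennedy International Airport"),
     ("EGLL", "London Heathrow Airport")],
    "RJTT",
))
```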

3. One-shot learning
Provide only one example to help the model understand the task. The advantages of one-shot learning are simplicity, faster prompt generation, and lower computational cost.

4. Prompt with context and task
"Context: xxxxxx"
"Task: yyyyyy" e.g. Grammar correction, Summarize for a 2nd grader, TL;DR summarization, Python to natural language, Calculate Time Complexity, Python bug fixer, SQL request, Analogy maker, Notes to summary
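The Context/Task pattern can be wrapped in a tiny template function (the function name and the sample text are mine, for illustration):

```python
def context_task_prompt(context, task):
    """Render the two-part 'Context / Task' prompt pattern."""
    return f"Context: {context}\nTask: {task}"

print(context_task_prompt(
    "def sum(a, b): return a + b",
    "Explain this Python function in plain English.",
))
```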

5. Fine-tuning vs. few-shot learning
Fine-tuning produces a highly specialized model that can provide more accurate and contextually relevant results for a given task.
Few-shot learning allows developers to quickly prototype and experiment with various tasks, making it a versatile and practical option for many use cases.

6. Fine-tuning
pip install pandas
openai tools fine_tunes.prepare_data -f test.jsonl

{"prompt": "", "completion": ""}
{"prompt": "", "completion": ""}
{"prompt": "", "completion": ""}
...

example:
{"prompt": "Review the following Python code: 'def sum(a, b): return a + b\nresult = sum(5, '5')'", "completion": "Type error: The 'sum' function is adding an integer and a string. Consider converting the string to an integer using int() before passing it to the function."}
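Producing the training file in code is straightforward: JSONL is one JSON object per line, which is the format `openai tools fine_tunes.prepare_data` consumes. The filename and the sample row below mirror the example above; the exact text is illustrative.

```python
import json

# One prompt/completion pair per training example.
rows = [
    {"prompt": "Review the following Python code: "
               "'def sum(a, b): return a + b\nresult = sum(5, '5')'",
     "completion": "Type error: The 'sum' function is adding an integer "
                   "and a string."},
]

# JSONL: serialize each example as a single line of JSON.
with open("test.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```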

% openai api fine_tunes.create -t "test_prepared.jsonl"

Upload progress: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 570/570 [00:00<00:00, 345kit/s]
Uploaded file from test_prepared.jsonl: file-VtyCiUlG7G5o5UiKmawq7bMN
Created fine-tune: ft-cMOy6aV3KwexUv9RmbBUfOwQ
Streaming events until fine-tuning is complete...

(Ctrl-C will interrupt the stream, but not cancel the fine-tune)
[2023-06-12 16:59:27] Created fine-tune: ft-cMOy6aV3KwexUv9RmbBUfOwQ

Stream interrupted (client disconnected).
To resume the stream, run:

openai api fine_tunes.follow -i ft-cMOy6aV3KwexUv9RmbBUfOwQ
% openai api fine_tunes.follow -i ft-cMOy6aV3KwexUv9RmbBUfOwQ
[2023-06-12 16:59:27] Created fine-tune: ft-cMOy6aV3KwexUv9RmbBUfOwQ
[2023-06-12 17:00:47] Fine-tune costs $0.00
[2023-06-12 17:00:47] Fine-tune enqueued. Queue number: 0
[2023-06-12 17:00:49] Fine-tune started
[2023-06-12 17:01:51] Completed epoch 1/4
[2023-06-12 17:01:52] Completed epoch 2/4
[2023-06-12 17:01:52] Completed epoch 3/4
[2023-06-12 17:01:53] Completed epoch 4/4
[2023-06-12 17:02:10] Uploaded model: curie:ft-personal-2023-06-12-08-02-10
[2023-06-12 17:02:11] Uploaded result file: file-kMj1j7ODxnwqzwbKwyL466Qe
[2023-06-12 17:02:11] Fine-tune succeeded

Job complete! Status: succeeded 🎉
Try out your fine-tuned model:

openai api completions.create -m curie:ft-personal-2023-06-12-08-02-10 -p <YOUR_PROMPT>



New ATC Features in X-Plane 12

The official site's write-up of the new ATC features in XP12 is a good summary; here is a quick translation.
New ATC Features in X-Plane 12

New in XP:
- 6 different regions (Asia, Australia, Europe, India, the United States, and global)
- Selectable male and female pilot voices
- Voices that vary with signal reception strength, distance from the transmitter, and changes in the surrounding terrain
- The AI now waits for the controller's instruction before executing it

One of the biggest changes, and perhaps one of the least conspicuous, is that a system of global regions now replaces the previous single worldwide control region (the old system modeled only US-specific rules and phraseology).
Having separate geographic regions means different parts of the world can use locally appropriate voices, but more importantly, each region can have its own phraseology and standards.
To match regional differences in accent, the system now supports more than one voice per region, providing more variety as well as both male and female voices.
Pilots can also choose a male or female voice on the sound settings page.
