Combination of parameters to solve_from_centroids for image that will solve the fastest #34
Comments
I don't see an attached image? |
Sorry about that; they are too large to attach (28 MB each). I updated the OP with the link. Thanks. |
https://github.com/smroid/cedar-solve is a fork of Tetra3 that I maintain. Together with cedar-detect, it was able to solve your image in less than 0.4s (175ms for centroiding and 159ms for plate solving, on a Raspberry Pi 4).
It looks like there might have been some camera shake in that image, as the stars are all doubled up. The cedar-detect algorithm doesn't like the double star images; it rejected most of them.
Also, it looks like the image is somewhat over-exposed. Try reducing the shutter time? That might also help with the shake issue.
I didn't do anything special to deal with the diffraction spots. If you reduce the exposure they should be less prominent. |
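For readers following along, here is a minimal sketch of that two-stage split (centroiding, then plate solving) using stock tetra3 only. cedar-detect itself is a separate Rust binary and is not shown here, and the image file name is a placeholder.

```python
# Illustrative timing of the two stages mentioned above: centroiding, then
# plate solving. Uses stock tetra3 as a stand-in for cedar-detect.
import time

import numpy as np
from PIL import Image
import tetra3

t3 = tetra3.Tetra3()  # loads tetra3's bundled default pattern database

# 'example_frame.png' is a hypothetical file name; substitute your own image.
image = np.asarray(Image.open('example_frame.png').convert('L'))

t0 = time.perf_counter()
centroids = tetra3.get_centroids_from_image(image)
t1 = time.perf_counter()
solution = t3.solve_from_centroids(centroids, size=image.shape)
t2 = time.perf_counter()

print(f'centroiding: {(t1 - t0) * 1e3:.0f} ms, solving: {(t2 - t1) * 1e3:.0f} ms')
print(solution.get('RA'), solution.get('Dec'), solution.get('FOV'))
```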
Thank you very much! I'm very impressed! Could you please share with me the algorithm for cedar-solve? Are there papers written on it? From the context it sounds like it's a centroiding algorithm. There was wind shake on the camera since it has a large baffle, and I was purposely trying to test cloud subtraction. What is your background subtraction algorithm? My centroiding code is able to group the double stars, so maybe we can collaborate. The output image that I provided indicates the stars used for the solution.
|
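Neither party's code appears in the thread; the sketch below is only a generic illustration of the kind of local background subtraction and blob grouping being discussed, where a shaken, doubled star merges into one connected component and yields a single centroid. It is not Windell's code and not cedar-detect's algorithm, and it assumes numpy and scipy are available.

```python
# Generic sketch: local background subtraction plus connected-component
# centroiding, so that the two lobes of a smeared star are grouped into one
# blob and reported as a single centroid.
import numpy as np
from scipy import ndimage

def extract_centroids(image, box=64, k_sigma=5.0):
    """Return (y, x) centroids of bright blobs after removing a local background."""
    img = image.astype(np.float32)
    # Coarse local background: a heavily smoothed copy of the image.
    background = ndimage.uniform_filter(img, size=box)
    residual = img - background
    # Threshold at k_sigma times a robust (MAD-based) noise estimate.
    noise = 1.4826 * np.median(np.abs(residual - np.median(residual)))
    mask = residual > k_sigma * noise
    # Dilate slightly so the lobes of a doubled star join into one component.
    mask = ndimage.binary_dilation(mask, iterations=2)
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(residual * mask, labels, range(1, n + 1))
```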
Oh also, what were your arguments supplied to tetra3?
|
No papers on Cedar-detect, but you can look at the code at https://github.com/smroid/cedar-detect/blob/main/src/algorithm.rs; it is pretty extensively commented.
Can you describe your imaging setup? What camera sensor, lens, and what exposure time for that image? What is causing the diffraction artifacts?
Regarding my tetra3 invocation, it is at https://github.com/smroid/cedar-solve/blob/master/examples/test_tetra3.py, so:
fov_estimate: None
match_radius: 0.01
match_threshold: 1e-4
pattern_checking_stars: no longer used in cedar-solve's fork of tetra3
match_max_error: 0.002
|
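Written out as a call, assuming t3 is a tetra3.Tetra3() instance and that your own centroid list and image size are supplied (the size values below are placeholders); match_max_error is taken from the cedar-solve fork as listed above.

```python
# The invocation above, spelled out as a solve_from_centroids() call.
# `centroids` and `size` are placeholders for your own centroid list and image
# dimensions. pattern_checking_stars is simply not passed, since the
# cedar-solve fork no longer uses it.
solution = t3.solve_from_centroids(
    centroids,              # (y, x) centroids, brightest first
    size=(3000, 4000),      # (height, width) in pixels -- placeholder values
    fov_estimate=None,      # no prior on the field of view
    match_radius=0.01,
    match_threshold=1e-4,
    match_max_error=0.002,  # per the cedar-solve fork
)
```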
Thank you very much for the info! I will look at the algorithm.
The camera is an ASI294 monochrome camera coupled with an electronic focuser from Beiger Engineering and a Sigma f/105 lens. Over the lens aperture is a 091 red filter to filter out sky scatter as much as possible. In front of the lens is an EMI mesh for RF attenuation, and this is where the scatter is coming from. The computer is an SBC from VersaLogic called the Owl; it has a 2 GHz Apollo processor. Onboard storage is 8 TB for full-res images and a 512 MB card for downsampled images. The baffle in front is 3 ft long with 7 vanes. The FOV is 10.7 x 8 deg.
|
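For a rough sense of scale, the stated field of view works out to roughly 9 arcsec/pixel if the full-resolution frame is about 4144 pixels wide; the pixel count is an assumption here, since the ASI294 readout resolution depends on the mode.

```python
# Back-of-the-envelope plate scale from the stated 10.7 deg horizontal FOV.
# The 4144-pixel frame width is an assumed ASI294 readout mode, not a given.
fov_deg = 10.7
width_px = 4144
print(f'{fov_deg * 3600.0 / width_px:.1f} arcsec/pixel')  # ~9.3 arcsec/pixel
```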
What was the exposure time for the image you posted? |
Exposure time 100ms, focus was done manually instead of autofocus.
|
The exposure time was 100 ms
|
Aloha,
I have an example image (Google Drive link below) and I would like to find a set of parameters that will solve the image the fastest. I have my own variable background subtraction and diffraction spike rejection algorithm (I'm happy to share), and it takes 0.5 seconds to compute a list of centroids from a monochrome PNG image. I'm wondering if any of you are able to solve the linked picture natively, without modification to the code, and what parameters you have chosen to make it solve the fastest. My solver takes 1.5 seconds. I know that this is system dependent, but I wanted to give an idea of the solve time.
Here are some givens:
fov_estimate=10.78 deg, fov_max_error=0.25, pattern_checking_stars=15, match_radius=0.01 (left at default), match_threshold=1e-3 (left at default)
Here are my thoughts (please correct me if I'm wrong about what I say):
I may be incorrect about some of my statements above. I only mean to throw my assumptions out there. Please correct me if I'm off!
Here is the Google link:
https://drive.google.com/drive/folders/1GNJ93yn-rDqaa0cDSLCgazjSVZR1yXyx?usp=sharing
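One way to approach the "fastest combination" question empirically is to time solve_from_centroids over a few candidate parameter sets on the same centroid list. The sketch below assumes stock tetra3 and uses a placeholder file name in place of the linked image.

```python
# Rough timing harness: solve the same centroid list with a few parameter
# combinations (including the givens above) and report how long each takes.
import time

import numpy as np
from PIL import Image
import tetra3

t3 = tetra3.Tetra3()

# 'example_frame.png' is a hypothetical file name; substitute the linked image.
image = np.asarray(Image.open('example_frame.png').convert('L'))
centroids = tetra3.get_centroids_from_image(image)

candidates = [
    dict(fov_estimate=10.78, fov_max_error=0.25, match_radius=0.01, match_threshold=1e-3),
    dict(fov_estimate=10.78, fov_max_error=0.25, match_radius=0.01, match_threshold=1e-4),
    dict(fov_estimate=None),  # unconstrained FOV, for comparison
]

for params in candidates:
    t0 = time.perf_counter()
    solution = t3.solve_from_centroids(centroids, size=image.shape, **params)
    dt = (time.perf_counter() - t0) * 1e3
    print(f'{params}: {dt:.0f} ms, RA/Dec = {solution["RA"]}, {solution["Dec"]}')
```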