Printf mcs mutex #574
base: master
Conversation
Force-pushed from 9c0a69d to b5faa01 (Compare)
Is this one ready to go, max?
extern int __bsg_pod_x; // The X coord of the tile's pod (lives in DRAM)
extern int __bsg_pod_y; // The Y coord of the tile's pod (lives in DRAM)
You could grab these values from the cfg_pod CSR at the beginning. There is no need to have the host set these.
I guess that's true. It saves a trivial few instruction words in the initialization to have the host set it. That CSR resets to the current pod, but it can be set by the core to something else (e.g., if it wants to read another pod's DRAM).
I'm honestly not 100% sure which is better. I think it's better to have the host set it? It's just a couple of extra packets in the nbf, and we don't need to entangle it with the semantics of the pod CSR. But I'm not sure; I'm open to being talked down from that.
I don't think it makes sense to keep these variables in DRAM, because if you change the pod x/y CSR, you are then reading those values from a different pod's DRAM space. I think it's better to make global x/y readable via CSR instructions (read-only); then they take up space in neither DRAM nor DMEM.
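For illustration, a CSR-based read from the tile's side could look roughly like the sketch below, in place of the DRAM-resident __bsg_pod_x/__bsg_pod_y. The CSR number, the packing of X and Y into one register, and the helper names are all assumptions made for the example, not the real hardware interface.

// Hypothetical sketch: read the pod X/Y coordinates from a read-only CSR
// instead of DRAM. The CSR number (0xfc3) and the 16/16 bit packing of X/Y
// are placeholders, NOT the real manycore encoding.
static inline int __bsg_read_pod_xy_csr(void) {
    int v;
    __asm__ volatile ("csrr %0, 0xfc3" : "=r"(v));   // placeholder CSR number
    return v;
}

static inline int __bsg_read_pod_x(void) { return  __bsg_read_pod_xy_csr()        & 0xffff; }
static inline int __bsg_read_pod_y(void) { return (__bsg_read_pod_xy_csr() >> 16) & 0xffff; }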
I think that makes a lot of sense. That'll also decouple these values from the semantics of the pod address register.
Why is it necessary to access the X and Y coordinates of the pod? As a general principle we would like to avoid code that depends on this, since it is a virtualization hazard...
It's a global lock.
Yes, why do we need a global lock across pods?
Maybe we don't? If each pod has its own IO node then there's no need.
At the moment, there's only one IO node shared across pods. A global lock prevents the IO node from being slammed with packets from all pods.
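As background on what an MCS mutex buys here: each waiter enqueues itself and spins on its own flag, so the waiting traffic stays local to the waiter instead of hammering one shared location (or, in this setting, the single shared IO node). Below is a minimal, generic MCS queue-lock sketch in portable C11 atomics; the names and the use of stdatomic are assumptions for illustration only, not the PR's actual implementation, which presumably uses the manycore's own primitives.

// Generic MCS queue-lock sketch (C11 atomics). All names are hypothetical.
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct mcs_node {
    _Atomic(struct mcs_node *) next;   // next waiter in the queue
    atomic_bool                locked; // each waiter spins on its own flag
} mcs_node_t;

typedef _Atomic(mcs_node_t *) mcs_lock_t; // tail pointer; NULL means the lock is free

static void mcs_acquire(mcs_lock_t *lock, mcs_node_t *me) {
    atomic_store_explicit(&me->next, NULL, memory_order_relaxed);
    atomic_store_explicit(&me->locked, true, memory_order_relaxed);
    // Swap ourselves in as the new tail; the old tail (if any) is our predecessor.
    mcs_node_t *pred = atomic_exchange_explicit(lock, me, memory_order_acq_rel);
    if (pred != NULL) {
        atomic_store_explicit(&pred->next, me, memory_order_release);
        // Spin on our own node, not on the shared tail.
        while (atomic_load_explicit(&me->locked, memory_order_acquire)) { }
    }
}

static void mcs_release(mcs_lock_t *lock, mcs_node_t *me) {
    mcs_node_t *succ = atomic_load_explicit(&me->next, memory_order_acquire);
    if (succ == NULL) {
        // No known successor: try to swing the tail back to NULL (lock becomes free).
        mcs_node_t *expected = me;
        if (atomic_compare_exchange_strong_explicit(lock, &expected, NULL,
                memory_order_acq_rel, memory_order_acquire))
            return;
        // A successor is mid-enqueue; wait for it to publish its node.
        while ((succ = atomic_load_explicit(&me->next, memory_order_acquire)) == NULL) { }
    }
    // Hand the lock directly to the successor.
    atomic_store_explicit(&succ->locked, false, memory_order_release);
}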
Force-pushed from b5faa01 to 2cb035b (Compare)
Seems okay though? With individual pod locks, the throughput is already throttled to some extent?
My thought is that the lock is presumably there because of a lack of re-entrancy in some library, which would not apply for multipod, since the pods have their own DRAM spaces. Or is the lock there to reduce interspersing of characters, or ... ?
It prevents the one IO node from being slammed, and it also keeps the messages from being garbled.
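For concreteness, a printf wrapped in such a lock might look like the sketch below. It reuses the hypothetical mcs_lock_t / mcs_acquire / mcs_release names from the sketch earlier in this thread and is not the PR's actual API; holding the lock for the whole call is what keeps characters from different tiles from interleaving at the IO node.

#include <stdio.h>

extern mcs_lock_t bsg_printf_lock;        // one global tail pointer (hypothetical)

void print_tile_hello(int x, int y) {
    mcs_node_t me;                        // per-caller queue node, lives on the stack
    mcs_acquire(&bsg_printf_lock, &me);   // serialize access to the single IO node
    printf("hello from tile (%d, %d)\n", x, y);
    mcs_release(&bsg_printf_lock, &me);   // hand the lock to the next waiter, if any
}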