os/fs/mqueue: Fix mq inode initialization locking #7199
Merged
ewoodev merged 1 commit into Samsung:master (Mar 24, 2026)
Conversation
os/fs/mqueue/mq_open.c
Outdated
@@ -133,6 +134,7 @@
Contributor
Can you update the comment to note that sched_lock is not sufficient in an SMP environment?
mqdes = mq_descreate(NULL, msgq, oflags);
if (!mqdes) {
	errcode = ENOMEM;
	goto errout_with_msgq;
Contributor
We need to call sem_give() for the error case.
Also in line 203.
*/

sched_lock();
flags = enter_critical_section();
Contributor
Are both sched_lock and enter_critical_section necessary?
If it is for synchronization, enter_critical_section alone might be sufficient.
seokhun-eom24
approved these changes
Mar 23, 2026
Contributor
LGTM~
ewoodev
approved these changes
Mar 24, 2026
Description:
This patch ensures that inode initialization, and the binding of the mq to the initialized inode, are processed under a single lock. This prevents stale, partially initialized inodes and the consequent errors.
Problem:
We encountered a case where two processes (a sender and a receiver) wanted to communicate via a private mq, in an environment with SMP enabled. Both had permission to create the mq via the O_CREAT flag.
The process holding sched_lock() goes to the blocked state while waiting for a semaphore during initialization of the mq inode. This leaves the inode partially initialized and causes a context switch.
When the other process then accesses the same partially initialized inode, it gets an error (errno 6, ENXIO: inode exists but is not an mq).
And if we do not introduce enter_critical_section(), then after the context switch both processes go on to create the inode, and errno 17 (EEXIST) is returned from inode_reserve().
Logs:
sched_addreadytorun: new task scheduled with pid = 32, name = csifw_sample, cpu = 0
mq_open: entering sched_lock: pid = 32, cpu = 0, path => /var/mqueue/tm_public_mq
mq_open: leaving sched_lock: pid = 32, cpu = 0
sched_addreadytorun: new task scheduled with pid = 17, name = task_manager, cpu = 1
mq_open: entering sched_lock: pid = 32, cpu = 0, path => /var/mqueue/tm_priv_mq32
up_block_task: task_state = 6 set by pid = 32, name = csifw_sample, cpu = 0
up_block_task: task_state = 6 set by pid = 17, name = task_manager, cpu = 1
sched_addreadytorun: new task scheduled with pid = 32, name = csifw_sample, cpu = 0
up_block_task: task_state = 6 set by pid = 32, name = csifw_sample, cpu = 0
sched_addreadytorun: new task scheduled with pid = 17, name = task_manager, cpu = 0
mq_open: entering sched_lock: pid = 17, cpu = 0, path => /var/mqueue/tm_priv_mq32
mq_open: leaving sched_lock: pid = 17, cpu = 0
taskmgr_send_response: mq_open failed!, errno = 6, cpu = 0
sched_addreadytorun: new task scheduled with pid = 32, name = csifw_sample, cpu = 1
mq_open: leaving sched_lock: pid = 32, cpu = 1
up_block_task: task_state = 9 set by pid = 32, name = csifw_sample, cpu = 1
up_block_task: task_state = 9 set by pid = 17, name = task_manager, cpu = 0
Logs (without enter_critical_section):
sched_addreadytorun: new task scheduled with pid = 17, name = task_manager, cpu = 1
mq_open: entering sched_lock: pid = 32, cpu = 0, path => /var/mqueue/tm_priv_mq32
mq_open: entering sched_lock: pid = 17, cpu = 1, path => /var/mqueue/tm_priv_mq32
(the two mq_open lines above were captured character-interleaved, printed concurrently by the two CPUs)
up_block_task: TSTATE_WAIT_SEM set by pid = 32, name = csifw_sample, cpu = 0
up_block_task: TSTATE_WAIT_SEM set by pid = 17, name = task_manager, cpu = 1
sched_addreadytorun: new task scheduled with pid = 32, name = csifw_sample, cpu = 0
up_block_task: TSTATE_WAIT_SEM set by pid = 32, name = csifw_sample, cpu = 0
sched_addreadytorun: new task scheduled with pid = 17, name = task_manager, cpu = 0
mq_open: leaving sched_lock: pid = 17, cpu = 0
up_block_task: TSTATE_WAIT_SEM set by pid = 17, name = task_manager, cpu = 0
sched_addreadytorun: new task scheduled with pid = 32, name = csifw_sample, cpu = 0
mq_open: leaving sched_lock: pid = 32, cpu = 0
taskmgr_receive_response: mq_open failed!, errno = 17, cpu = 0
up_block_task: TSTATE_WAIT_SEM set by pid = 32, name = csifw_sample, cpu = 0
Fix:
We release the inode semaphore only after the mq has been bound to the inode, and we take enter_critical_section() after taking sched_lock().
Signed-off-by: Rishabh Singh <ris.singh@samsung.com>