
iceScrum Forums Discuss on iceScrum

Forum Replies Created

Viewing 5 posts - 1 through 5 (of 5 total)


  • Kelvin-SG
    Participant

    We have applied the suggested fix.
    A few tests were done and it seems to work perfectly 🙂
    The new index prevents the creation of two tasks with the same name within the same user story, and it is now possible to move two tasks with the same name into the urgent/recurrent block. So this should be fine for our next sprint closure!

    For your information, please find below the SQL queries that were used:

    USE [ICESCRUM]
    GO

    /****** Object: Index [UQ__icescrum__177B8D8492ECA575] Script Date: 05/09/2016 14:10:27 ******/
    ALTER TABLE [dbo].[icescrum2_task] DROP CONSTRAINT [UQ__icescrum__177B8D8492ECA575]
    GO

    -- create the replacement index
    CREATE UNIQUE NONCLUSTERED INDEX UQ__icescrum_TaskName_Fix
    ON [ICESCRUM].[dbo].[icescrum2_task]([parent_story_id],[name])
    WHERE [parent_story_id] IS NOT NULL;

    commit;
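
    For anyone applying the same fix, the new filtered index can be double-checked from the catalog views (a quick sanity-check sketch; the index name matches the one created above):

    ```sql
    -- Confirm the replacement index exists, is unique and is filtered
    SELECT name, is_unique, has_filter, filter_definition
    FROM sys.indexes
    WHERE object_id = OBJECT_ID('dbo.icescrum2_task')
      AND name = 'UQ__icescrum_TaskName_Fix';
    ```

    Note that filtered indexes (and the `has_filter`/`filter_definition` columns) require SQL Server 2008 or later, which matches our 2008 R2 instance.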

    Thanks again !

    Kelvin


    Kelvin-SG
    Participant

    Thank you very much for your reply and clear explanation.
    We will check the solution you provided and let you know! By the way, we are using SQL Server 2008 R2; I don’t know whether this behaviour also happens in more recent versions.

    In the meantime, what we have done is write a SQL SELECT query that checks whether duplicate task names exist between active stories and existing urgent or recurrent tasks.
    If so, we manually update the names of those tasks in iceScrum.
    This is manageable as we currently have only 2 projects in parallel. But as you mentioned, this check needs to be done at server level, so it may become more difficult to manage with many ongoing projects.
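
    Roughly, the detection query looks like this (a simplified sketch from memory; the `id` column name is assumed and the join conditions may need adapting):

    ```sql
    -- Urgent/recurrent tasks (parent_story_id IS NULL) whose name collides
    -- with a task attached to a user story
    SELECT u.id AS urgent_task_id, s.id AS story_task_id, u.name
    FROM [dbo].[icescrum2_task] u
    JOIN [dbo].[icescrum2_task] s
      ON s.name = u.name
     AND s.parent_story_id IS NOT NULL
    WHERE u.parent_story_id IS NULL;
    ```

    Any row it returns is a candidate for a manual rename before closing the sprint.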

    Thanks again. We really appreciated the support you provided.

    Regards,

    Kelvin


    Kelvin-SG
    Participant

    Hello again,

    As mentioned previously, we had our sprint closure this morning.
    Prior to that, we upgraded our iceScrum to R6#14.11 and I applied your advice of managing tasks and stories one by one before closing the sprint.

    While doing so, we were able to find one problematic task and analyze the issue.
    In fact, the problem occurs when moving a completed task of an incomplete user story to the completed urgent tasks block (as iceScrum does when shifting an incomplete user story).

    For one task, this was not successful because of the same exception:
    Violation of UNIQUE KEY constraint ‘UQ__icescrum__177B8D8492ECA575’. Cannot insert duplicate key in object ‘dbo.icescrum2_task’.

    Analyzing this DB constraint, we found that we already had, in some previous sprint, another urgent task with the same name. So this led to 2 tasks with the same name and parent_story_id = NULL (as for all urgent tasks). In that case, the exception is triggered.
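
    To illustrate the behaviour outside iceScrum (a minimal standalone repro on a temp table, not the real schema):

    ```sql
    CREATE TABLE #repro (name NVARCHAR(255), parent_story_id INT,
        CONSTRAINT UQ_repro UNIQUE (parent_story_id, name));

    INSERT INTO #repro VALUES (N'Test Execution', NULL);  -- OK
    -- SQL Server considers two NULLs equal for UNIQUE constraints,
    -- so a second urgent task with the same name is rejected:
    INSERT INTO #repro VALUES (N'Test Execution', NULL);
    -- fails with: Violation of UNIQUE KEY constraint 'UQ_repro'
    ```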

    In our case, those tasks correspond to generic tasks that we have in many user stories, such as “Test Case creation”, “Test Execution” or “SFD analysis”.
    That’s why we encounter this issue when the user stories are not all completed.

    In your previous message, you mentioned that this is linked to the way SQL Server manages the NULL value within a constraint. I think you also mentioned a possible workaround.
    Would it be possible for you to give more explanation on this workaround?

    For our case today, we were finally able to close our sprint after updating the task names to make them unique. But it would be very helpful to prevent this case for our next sprints.

    Thanks and regards,

    Kelvin


    Kelvin-SG
    Participant

    Hi Again,

    Checking the groovy.config file, I found the log directory for iceScrum.
    Now I am able to find some errors on the sprint closure dates.
    For example :
    org.springframework.dao.DataIntegrityViolationException: could not update: [org.icescrum.core.domain.Task#359]; SQL [update icescrum2_task…

    Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Violation of UNIQUE KEY constraint ‘UQ__icescrum__177B8D8492ECA575’. Cannot insert duplicate key in object ‘dbo.icescrum2_task’. The duplic

    The log is pretty extensive. Would it be helpful to copy/paste it here?

    Thanks and regards,

    Kelvin


    Kelvin-SG
    Participant

    Hello Nicolas,

    Thank you very much for your reply.
    I will try to answer you point by point.

    1. Understanding the issue. I have checked the logs directory in my Tomcat folder. The icescrum.log file has not been updated since the installation of iceScrum, and neither catalina.out nor the localhost logs contain any entries on the dates of the sprint closures (they contain other errors from time to time anyway).
    Is there another directory I should check to find logs?

    In terms of the stories themselves, the only specific detail I could think of is that they initially had some dependencies defined. We had an error related to the dependencies when closing sprint 1, so we removed all dependencies before closing it again.

    2. Fix the existing data. What we have done up to now is recreate the missing stories (and tasks) that needed to be moved to the next sprint. But I have to admit that I have also tried manual DB updates of some done user stories in order to have them displayed in the release plan.
    For example, I used queries of this type:

    update [ICESCRUM].[dbo].[icescrum2_story]
    set state = 7, rank = 1, parent_sprint_id = 60, done_date = '2016-08-05 06:00:00.000'
    where id = 74 and backlog_id = 57

    It works well to have the story appear again in the release plan, but unfortunately (though as expected), this did not refresh the sprint’s completed points, the related graphs, or the tasks that were contained within those stories.
    Do you think that I should revert those updates?

    To me it is important that the release plan reflects the actual progress on the project, as it is the main view we use in the project steering committees.

    3. Avoid the problem in future sprints. Thank you for that advice, we will apply it for sure. Our next sprint closure is on Friday, so we’ll see how it goes. I’ll update this thread afterwards.
    If we encounter those issues again on Friday, would you be interested in having a look at the DB dump?

    Thanks again for your support.

    Kelvin
