
jstill

Full Members
  • Posts: 11
  • Joined
  • Last visited
  • Days Won: 2

jstill last won the day on 29 September 2020

jstill had the most liked content!

Profile Information

  • Gender: Male
  • Location: Lowestoft / Norwich
  • Company Name: Alan Boswell Group

jstill's Achievements

Rookie (2/14)

  • First Post
  • Collaborator
  • Conversation Starter
  • Week One Done
  • One Month Later

Recent Badges: 4

Reputation: 1

Community Answers

  1. Hi @Mark Sollis,

     We're going with getting a custom dev done for the deletion of access history records. Our account manager Phil Brown has done a good job getting the cost down. Just waiting on sign-off internally right now. Fingers crossed that'll solve the problem for us. At least we can do full refreshes without it taking forever.

     Hope you're all staying safe these days.
  2. Just wanted to say thanks to all for the feedback. One thing I have found with ICP - do not change a keyword name unless you're planning to do a complete refresh afterwards. Every time you edit one of the records you'll get ICP errors in the logs and the ic_view will not update (the ICP table is ok); a quick column check for spotting this is sketched after this list. We've also found that sometimes new fields being added do not get pushed to ICP at all (we're raising a call about that now).

     Kind regards, James
  3. Hi @Mark Sollis @karl,

     Thanks for the replies; to take the points in order:

     1. Occasionally data doesn't go into IC tables. There's a blip in the connection, or maybe there's a SQL error (usually from a trigger, which we no longer use) which rolls the change back. A data refresh will push any missing records from OpenGI into SQL. I have also found a few instances of ghost records, where records are in SQL but not in OpenGI, and the data refresh doesn't fix them. Presumably the check is only one way - I can't really see anything reading each row in SQL and checking OpenGI for it. So to get rid of those, I think we're going to have to reinstate a regular full bulk refresh. I don't actually know how long this will take with ICP, but hopefully not the 12 hours it was taking with regular IC towards the end.

     2. There isn't one at the moment. We've been quoted 3 days for investigation / spec and 8-11 days dev cost to build it, which seems excessive given the relative simplicity of what's required.

     3. Purging is a big problem for us. We're in discussions on the GDPR front now, as the functions in OpenGI aren't really any good to us. In a nutshell: you have to purge transactions / documents for specific policies based on term date, but we don't want to do that. We want to delete all clients whose last policy expired over 7 years ago. However, we want policies that relate to still-live clients to remain, as well as any EL policies. We also want to keep prospects that are being worked on (we use Core for all clients for a bunch of reasons), so we add an extra requirement that the ToB data on the client must be over 3 years old as well (a rough selection query along these lines is sketched after this list). There's another issue in deleting documents: you can't delete a one-off letter without confirming them all first (which means you can no longer read the content of them), which in turn means you can't delete any policies / clients with a one-off letter on them.

     Ideally I'd like to be able to mark clients and policies for deletion, and when we press purge it just deletes all records relating to those references - transactions, documents, access history and so on. Then we could all set our own flags based on our personal criteria, via DB enquiry or via X-Stream. But for now... purging is out for us. Fair point you make, though.

     With regards to the file taking 12 hours to process, I'm pretty certain it's because it's doing a data integrity check on the 140 million rows as it's being constantly written to by ICP. The contention on the disks must be crazy.

     @karl What bulk import process are you referring to? It can't be the full database refresh in that time, can it?

     Kind regards, James
  4. Hi all - been a while since I last posted, but I have a bit of an issue and I wondered if anyone else has come across a solution. At the moment it's looking like an expensive custom build.

     In short, the access history is huge. We made the jump from IC to ICP a few months ago so we would no longer need a 12 hour full refresh and could just do the routine 'data refresh' rather than a full bulk transfer. In a nutshell:

     • Our access history has over 141 million records in it.
     • A data refresh (not full bulk export) in ICP started at 9:15 am does not finish the same day.
     • A data refresh started at 03:00 am does complete, but takes 19.5 hours.
     • A few days ago I was able to see the BAH file (access hist) was being processed at 8:50 am, which is one of the last files processed in the refresh - it looks like this took 12 hours plus to process.
     • During the refresh, queries get locked and there are delays for info going into ICP - I have seen delays of over 450 seconds shown on the monitor.
     • Obviously the full bulk refresh takes a lot longer due to this (as it did with original Infocenter).
     • For the meantime, I've changed the data refresh to only run on a Sunday so it doesn't impact the business.

     The fix would be to remove a lot of the older records from the access history. As I understand it, the tables in ICP replicate the structure found in OpenGI. As such, all we really want is a one-off process that deletes records from OpenGI's equivalent of icp_braccesshist. Getting rid of records prior to 01/01/2015 would approximately halve the data we have, but I'd be inclined to get rid of everything prior to 01/01/2017, which would lose over 90 million records. In short, we just need the OpenGI equivalent of the below (a batched SQL-side version is also sketched after this list):

     delete from icp_braccesshist where [#date] < '20170101'

     Has anyone else had this issue and/or found a solution? I don't want to disable the access hist export as we use the data for a few things.

     Kind regards, James
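On the symptom in answer 2 (the ICP table updating while the ic_view lags, and new fields not appearing at all), one crude way to spot a mismatch might be to compare the columns exposed by the replicated table and its view. The table and view names below are only assumed counterparts (icp_ prefix for the table, ic_ for the view, as the posts suggest) and would need swapping for the table affected by the keyword rename:

     -- Compare columns between the icp_ table and its ic_ view (names are assumptions).
     select c.COLUMN_NAME,
            max(case when c.TABLE_NAME = 'icp_braccesshist' then 1 else 0 end) as in_icp_table,
            max(case when c.TABLE_NAME = 'ic_braccesshist'  then 1 else 0 end) as in_ic_view
     from INFORMATION_SCHEMA.COLUMNS c
     where c.TABLE_NAME in ('icp_braccesshist', 'ic_braccesshist')
     group by c.COLUMN_NAME
     order by c.COLUMN_NAME;

A column present in the icp_ table but missing from the ic_ view would point to the view not having been refreshed after the change.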
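The selection criteria described in answer 3 could, in principle, be expressed as a query to flag candidate clients before any purge. This is a sketch only: the table and column names (clients, policies, status, policy_type, expiry_date, tob_date) are purely illustrative, not real ICP names, and a SQL Server target is assumed:

     -- Illustrative only: clients whose ToB data is over 3 years old and who have
     -- no policy that is live, EL, or expired within the last 7 years.
     select c.client_ref
     from clients c
     where c.tob_date < dateadd(year, -3, getdate())
       and not exists (select 1
                       from policies p
                       where p.client_ref = c.client_ref
                         and (p.status = 'LIVE'
                              or p.policy_type = 'EL'
                              or p.expiry_date >= dateadd(year, -7, getdate())));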
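For the SQL-side copy only (this does not answer the real question in answer 4, which is deleting the matching history inside OpenGI itself), a delete over 90 million rows is usually easier to live with in batches. A minimal sketch, assuming a SQL Server target and the icp_braccesshist table from the post:

     -- Trim the SQL-side access history copy in chunks to limit lock escalation and
     -- log growth; the 100,000 batch size is an arbitrary starting point.
     declare @rows int = 1;
     while @rows > 0
     begin
         delete top (100000) from icp_braccesshist
         where [#date] < '20170101';
         set @rows = @@rowcount;
     end;

Rows removed this way would presumably reappear on the next full bulk refresh unless the same history is also purged on the OpenGI side, which is why the custom dev discussed in answer 1 targets OpenGI's own copy.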