

Kaity has worked her magic and implemented the fix. Search is working now
Basically, after my healing was done, it just never got any better. No matter how much I dilated, the largest dilator was never comfortable, and with the effort it took to use that, nothing made out of flesh and blood instead of rigid plastic was going to stand a chance.
Because it was never comfortable, and I was never able to have penetrative sex, I ended up just giving up on dilation during the covid lockdowns.
I had a very different experience, unfortunately. It turns out that I had quite a bit of internal scarring, so dilation was never pleasant for me. It wasn't hard to do, but it never felt good: tense and uncomfortable, sort of like stretching a piercing.
Still, despite that, it was a life changing experience, and I’d do it again every time if I had the choice!
Well, the last update seems to have cleared the queue, and all of my history from that 10 year import now shows, with trips and places identified!
But now, it's having issues with the new Google format import. I've got a 34MB file there that goes back to 2017; the app says it has imported, but the data never appears in my history.
If it's relevant, there is overlap in the data, as my 10 year takeout import went up to 2023, and my "new format" import starts in 2017 and goes up to a couple of days ago. I changed my Google account in 2017, but was logged in to both on my phone simultaneously, so I was accruing location data on both accounts at the same time for a while before I turned it off on my old account.
Yep, we have the fix, and will be putting it in place ASAP
That’s really interesting. Australian here, and I’ve remarked several times how the userbase of the fediverse isn’t dominated by American voices like most other social media platforms I’ve used.
Since I last commented, the queue has jumped from about 9,000 outstanding items to 15,000, and it appears that I now have timelines for a large portion of my history.
However, the estimated time is still slowly creeping up (though only by a minute or two, despite adding 6000 more items to the queue).
I haven’t uploaded anything manually that might have triggered the change in queue size.
Are there any external calls made during processing of this queue that might be adding latency?
tl;dr - something is definitely happening
Ok, so it may not be frozen. The queue numbers seem to imply it is; however, timelines and places are slowly filling out in my history. A couple of dates I had looked at previously were showing tracklogs for the day but no timeline information, and now they're showing timelines for the day.
The domain you linked isn't publicly resolvable. It's only visible within your local network.
I was also trying to set up GPSLogger whilst it was crunching through the backlog, and I manually transferred a file from that app before I had autologging configured. Not sure if that could have done it?
The times don’t overlap, as the takeout file is only up until 2023
i7-8700 with 64GB of RAM
It's a 1GB JSON file that has about 10 years of data. I get multiple repeats of the rabbit timeout in the logs. The Job Status section tells me it has just under 9 hours of processing remaining for just over 16,000 items in the stay-detection-queue. The numbers change slightly, so something is happening, but it's been going for over 12 hours now, and the time remaining is slowly going up, not down.
reitti-1 | 2025-07-04T03:06:17.848Z WARN 1 --- [ntContainer#2-1] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it
reitti-1 |
reitti-1 | com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - delivery acknowledgement on channel 9 timed out. Timeout value used: 1800000 ms. This timeout value can be configured, see consumers doc guide to learn more, class-id=0, method-id=0)
reitti-1 | at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.checkShutdown(BlockingQueueConsumer.java:493) ~[spring-rabbit-3.2.5.jar!/:3.2.5]
reitti-1 | at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.nextMessage(BlockingQueueConsumer.java:554) ~[spring-rabbit-3.2.5.jar!/:3.2.5]
reitti-1 | at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.doReceiveAndExecute(SimpleMessageListenerContainer.java:1046) ~[spring-rabbit-3.2.5.jar!/:3.2.5]
reitti-1 | at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.receiveAndExecute(SimpleMessageListenerContainer.java:1021) ~[spring-rabbit-3.2.5.jar!/:3.2.5]
reitti-1 | at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.mainLoop(SimpleMessageListenerContainer.java:1423) ~[spring-rabbit-3.2.5.jar!/:3.2.5]
reitti-1 | at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1324) ~[spring-rabbit-3.2.5.jar!/:3.2.5]
reitti-1 | at java.base/java.lang.Thread.run(Unknown Source) ~[na:na]
reitti-1 | Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - delivery acknowledgement on channel 9 timed out. Timeout value used: 1800000 ms. This timeout value can be configured, see consumers doc guide to learn more, class-id=0, method-id=0)
reitti-1 | at com.rabbitmq.client.impl.ChannelN.asyncShutdown(ChannelN.java:528) ~[amqp-client-5.25.0.jar!/:5.25.0]
reitti-1 | at com.rabbitmq.client.impl.ChannelN.processAsync(ChannelN.java:349) ~[amqp-client-5.25.0.jar!/:5.25.0]
reitti-1 | at com.rabbitmq.client.impl.AMQChannel.handleCompleteInboundCommand(AMQChannel.java:193) ~[amqp-client-5.25.0.jar!/:5.25.0]
reitti-1 | at com.rabbitmq.client.impl.AMQChannel.handleFrame(AMQChannel.java:125) ~[amqp-client-5.25.0.jar!/:5.25.0]
reitti-1 | at com.rabbitmq.client.impl.AMQConnection.readFrame(AMQConnection.java:761) ~[amqp-client-5.25.0.jar!/:5.25.0]
reitti-1 | at com.rabbitmq.client.impl.AMQConnection.access$400(AMQConnection.java:48) ~[amqp-client-5.25.0.jar!/:5.25.0]
reitti-1 | at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:688) ~[amqp-client-5.25.0.jar!/:5.25.0]
reitti-1 | ... 1 common frames omitted
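For anyone hitting the same error: the 1800000 ms in that log is RabbitMQ's default 30-minute consumer delivery acknowledgement timeout, which a long stay-detection job can easily blow past. As a workaround (assuming you can mount a config file into the rabbitmq container; the path and 4-hour value below are just my guesses, not anything Reitti documents), the timeout can be raised:

```ini
# rabbitmq.conf — mounted into the container, e.g. at
# /etc/rabbitmq/conf.d/10-consumer-timeout.conf (path is an assumption)
# Raise the delivery acknowledgement timeout from the 30-minute default
# to 4 hours (value is in milliseconds) so slow jobs can ack in time.
consumer_timeout = 14400000
```

This just widens the window; it doesn't make the queue drain any faster.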
I managed to break our instance. I imported several years' worth of Google Takeout location data, and now the "stay-detection-queue" is stalled.
Minorities are outnumbered by definition. Putting minority rights up to majority vote leads to minorities getting fucked over…
HDR in a nutshell. But we have to get through it eventually right?
If this actually stands a chance of taking off, I’ll honestly take what I can get to normalise HDR images
HDR capable PNGs that don’t look shite on SDR displays? Sign me up!
Unless it’s schoolies!
I’m on android. I’ll raise a bug report.