For PCNT, it does not make a difference.
For FIN, I am surprised that you measured the values you did. FIN frequency measurement on the MC4x is based on measuring the time between pulse edges instead of counting pulses. This method makes it more responsive than e.g. an XA2, but it has to come with a timeout, which is set at 200 ms. This is why the MC4x Appendix A information states 5 Hz as the lowest frequency when the pin is configured as FIN.
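As a rough illustration (in Python, not the actual MC4x firmware), edge-period measurement with a timeout works like this; the function name and structure are mine:

```python
# Sketch of frequency measurement from the period between pulse edges,
# with a timeout that sets the lowest measurable frequency.
TIMEOUT_S = 0.200  # the 200 ms edge timeout mentioned above

def frequency_from_edges(edge_times_s):
    """Return frequency in Hz from a list of edge timestamps,
    or 0.0 if the last period exceeded the timeout."""
    if len(edge_times_s) < 2:
        return 0.0
    period = edge_times_s[-1] - edge_times_s[-2]
    if period > TIMEOUT_S:
        return 0.0  # timed out waiting for the next edge
    return 1.0 / period

# The timeout implies a floor on measurable frequency:
print(1.0 / TIMEOUT_S)  # 5.0 Hz, matching the Appendix A figure
```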
The reason is that when you have accumulated a high number of pulses, you may run into a limit on how accurately the PCNT channel can represent its scaled value. The PCNT channel has value type Real, which means it can accurately represent up to 7 significant digits. It is reset on startup (assuming you don't use store value), but if you risk exceeding this before restarting the system, you should add a function for resetting it.
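The 7-significant-digit limit can be demonstrated outside IQAN by round-tripping a count through a 32-bit float (which is what a Real value type typically is); this Python helper is just for illustration:

```python
import struct

def to_real(x):
    """Round-trip a value through 32-bit float precision,
    approximating what a Real-typed channel can store."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

print(to_real(16_777_215.0))      # 16777215.0 : still represented exactly
print(to_real(16_777_216.0 + 1))  # 16777216.0 : the extra pulse is lost
```

Once the accumulated count gets large enough, adding a single pulse no longer changes the stored value, which is why a reset function is a good idea.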
Yes, the JFIN channel only reads PGNs that fit in a single CAN frame.
To read a J1939 multipacket broadcast message, one would have to set up JFIN channels for PGNs 60416 (Transport Protocol Connection Management) and 60160 (Data Transfer).
It is kind of tricky, as these same PGNs are used for transferring all multipacket PGNs; adding JFINs with these PGNs to the application will take priority over the built-in DM1/DM2 handling.
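For reference, this Python sketch shows how a J1939 BAM multipacket message is reassembled from TP.CM (PGN 60416) and TP.DT (PGN 60160) frames. The field layout follows SAE J1939-21, but the function names are mine and this is not how JFIN works internally:

```python
def parse_tp_cm_bam(data):
    """Parse the 8 data bytes of a TP.CM BAM announcement (control byte 32)."""
    assert data[0] == 32, "not a BAM control frame"
    size = data[1] | (data[2] << 8)     # total message size in bytes
    packets = data[3]                   # number of TP.DT packets to expect
    # data[4] is reserved (0xFF); bytes 5..7 hold the announced PGN
    pgn = data[5] | (data[6] << 8) | (data[7] << 16)
    return pgn, size, packets

def reassemble(bam_data, dt_frames):
    """Concatenate TP.DT payloads; byte 0 of each frame is the sequence number."""
    pgn, size, packets = parse_tp_cm_bam(bam_data)
    ordered = sorted(dt_frames, key=lambda f: f[0])
    payload = b"".join(bytes(f[1:]) for f in ordered)[:size]
    return pgn, payload
```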
If you use FIN on an expansion module, e.g. XA2 or XC21, the expansion module CAN protocol limits the resolution to 1 Hz. Locally on the module, the frequency is measured by counting pulse edges, and a moving average filter with a total time depth of 500 ms is also applied.
To get a more application-specific balance between accuracy and response time, you could experiment with applying your own moving average filter to a PCNT-based frequency calculation.
A limitation is the window size of the SFC channel moving average, but you could cascade multiple filters (which is essentially what happens when applying a moving average to e.g. an XA2 FIN).
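The cascading idea can be sketched like this (Python for illustration only; window sizes are arbitrary examples, not IQAN limits):

```python
from collections import deque

class MovingAverage:
    """Simple moving average over a fixed window."""
    def __init__(self, window):
        self.buf = deque(maxlen=window)
    def update(self, x):
        self.buf.append(x)
        return sum(self.buf) / len(self.buf)

# Cascading two short filters gives deeper, smoother filtering
# than either stage alone, at the cost of slower response.
stage1 = MovingAverage(5)   # e.g. applied to a PCNT-based frequency value
stage2 = MovingAverage(5)   # second stage deepens the filtering

def filtered(sample):
    return stage2.update(stage1.update(sample))
```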
See IQANdesign 5 solution library for a template:
What it means is 70 mA per VREF.
Each VREF will have to supply its own set of sensors; they are not designed to be connected in parallel to drive a higher combined load.
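As a quick illustration of budgeting sensors per VREF output (the sensor current figures here are made up, only the 70 mA limit comes from the spec above):

```python
VREF_LIMIT_MA = 70               # per-VREF supply limit
sensors_on_vref_a = [15, 15, 20] # hypothetical sensor draws in mA

load = sum(sensors_on_vref_a)
print(load, "mA,", "OK" if load <= VREF_LIMIT_MA else "over budget")
```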
One additional comment about the changes in log functionality that Marcus mentions is that in 2.x and 3.x, the logging could more easily miss events. Since version 4.00, the log event items are calculated as part of the application, and when a log event is triggered it is placed on the log queue.
Writing the events to the log flash memory takes considerably longer than calculating the triggering conditions, and is done in a separate, lower-priority process. If too many events are triggered before the IQAN master has time to record them all, the System Information channel Log queue will, as Marcus mentioned, reach 100%, indicating that only some of the events will be saved.
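A toy model of why the Log queue can hit 100%: events are produced at application speed but drained by a slower flash-writing task. The numbers and structure here are illustrative, not MD4 internals:

```python
QUEUE_CAP = 100
queue_len = 0
dropped = 0

def app_cycle(events_triggered, flash_writes_completed):
    """One application cycle: drain written events, enqueue new ones."""
    global queue_len, dropped
    queue_len = max(0, queue_len - flash_writes_completed)
    space = QUEUE_CAP - queue_len
    accepted = min(events_triggered, space)
    dropped += events_triggered - accepted   # events lost once the queue is full
    queue_len += accepted
    return queue_len * 100 // QUEUE_CAP      # "Log queue" in percent

# Producing 5 events per cycle while flash keeps up with only 2:
for _ in range(50):
    pct = app_cycle(events_triggered=5, flash_writes_completed=2)
print(pct, dropped)  # queue saturates at 100%; later events are lost
```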
What do you have connected to the MD4 when this occurs? The picture looks like it is on the bench, but I am guessing there might be some CAN nodes connected to it there as well.
I see two possible reasons.
If you have some of the other CAN nodes connected to it: Have you made an estimate of the amount of outgoing traffic on the CAN buses, to see if there is time to send the CAN frames at the transmit rate you set?
If the physical installation on CAN is correct but the MD4 tries to send too many messages, this results in a buildup of messages in the queue. When this reaches a certain level, one of the checks related to the RTOS kicks in and stops the MD4 (safe state); the bluescreen is then shown with some debugging information.
For example, if you have a 250 kbps bus and 15 full-length CAN frames are sent with a 10 ms repetition rate, each CAN frame would need about 0.7 ms to transmit on the bus, and the time it would take to send them all would be 10.5 ms. This would result in a theoretical bus utilization > 100%, which does not work.
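The back-of-envelope check from the example above, written out:

```python
# Bus load estimate using the figures above
# (~0.7 ms per full-length frame at 250 kbit/s).
frame_time_ms = 0.7
frames = 15
period_ms = 10

utilization = frames * frame_time_ms / period_ms
print(f"{utilization:.0%}")  # over 100% -> the schedule cannot fit
```

Anything approaching 100% is already a problem in practice, since other nodes need bus time too.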
If the MD4 is showing this symptom when nothing is connected:
Do you have CAN-C on the MD4 set to Terminate=No?
In version 5.00, there is a problem with how wiring errors on CAN-C/D are handled. A problem like a short to ground or an unterminated CAN bus may, instead of the expected CAN critical error, result in a buildup of outgoing messages. This would then result in the same error as mentioned above.
The problem with the MD4 handling of CAN-C/D errors is solved in 5.01. Version 5.01 is scheduled for release in early November.
It looks like a mistake in the IQANdesign application, where the names of MDGN channels do not match the values they are measuring.
Check that the value property on each MDGN matches the name, either by looking at the channel or by looking at the block diagram.
Thank you all for reporting this problem, we also saw MD4 returns via the regular after-sales channels.
In the investigation that followed, we found out that the failure of the RTC and the internal temperature measurement relate to a damaged RTC component; when the RTC failed, it also affected the communication with the temperature sensor.
A solution was implemented in MD4 production in September; it gives the RTC component better protection, making it less prone to failure.